diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cleanfiles Downloader Exe.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cleanfiles Downloader Exe.md
deleted file mode 100644
index a22e76be453d27780811408e9bbaf94ca2e4e3be..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cleanfiles Downloader Exe.md
+++ /dev/null
@@ -1,27 +0,0 @@

How to Use CleanFiles Downloader to Download Files from CleanFiles.net

-

CleanFiles Downloader is a software program that allows you to download files from CleanFiles.net, a file hosting service that requires you to complete a survey before accessing the download link. CleanFiles Downloader bypasses the survey and lets you download the file directly. Here is how to use CleanFiles Downloader to download files from CleanFiles.net:

-

Download Zip: https://byltly.com/2uKvR3



-
  1. Download CleanFiles Downloader from https://cleanfiles-downloader.software.informer.com/, the program's official listing page; the download there is reported to be safe and virus-free. The same "download" section also lists related programs such as µTorrent, Internet Download Manager, Creevity Mp3 Cover Downloader, and MetaProducts Mass Downloader.
  2. Install CleanFiles Downloader on your computer. The installation is simple and straightforward: follow the on-screen instructions and accept the terms and conditions. The program's executable file is named CleanFiles Downloader v5.1.exe.
  3. Run CleanFiles Downloader. You will see a simple interface with a text box where you can enter the URL of the file you want to download from CleanFiles.net.
  4. Copy and paste the URL of the file into the text box. For example, if you want to download a file called example.exe, the URL might look like this: https://cleanfiles.net/?id=1234567890
  5. Click the "Download" button. CleanFiles Downloader will automatically bypass the survey and start downloading the file to your computer. You can follow the progress on the status bar.
  6. Wait for the download to finish. Once it is complete, you can find the file in your default download folder or in the folder you chose during installation, and then open or run it as you wish.
-

CleanFiles Downloader is a useful tool for downloading files from CleanFiles.net without completing surveys. However, you should be careful about what files you download from CleanFiles.net, as some of them might contain viruses or malware. You should always scan your files with a reliable antivirus program before opening or running them. You should also respect the intellectual property rights of the file owners and only download files that you have permission to use.
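
One practical precaution, alongside a virus scan, is to compare the downloaded file's checksum against one published by the distributor, when a checksum is available. Below is a minimal Python sketch of that check; the file name and the expected hash are placeholders for this example, not values from CleanFiles.

```python
# Minimal sketch: verify a downloaded file's SHA-256 digest before running it.
# The EXPECTED value is a placeholder; substitute the publisher's checksum.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 65536) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

installer = Path("CleanFiles Downloader v5.1.exe")  # executable named in step 2
EXPECTED = "0" * 64  # placeholder digest
if installer.exists():
    actual = sha256_of(installer)
    print("OK" if actual == EXPECTED else f"Hash mismatch: {actual}")
```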

- -

How to Remove CleanFiles Downloader from Your Computer

-

If you no longer need CleanFiles Downloader or you want to uninstall it for any reason, you can easily remove it from your computer. Here is how to remove CleanFiles Downloader from your computer:

-
  1. Go to the Start menu and click on Control Panel.
  2. Click on Programs and Features or Add/Remove Programs, depending on your version of Windows.
  3. Find CleanFiles Downloader in the list of programs and click on it.
  4. Click on the Uninstall button and follow the instructions on the screen.
  5. Restart your computer if prompted.
-

CleanFiles Downloader should be completely removed from your computer. You can also delete any files that you downloaded from CleanFiles.net using CleanFiles Downloader if you don't need them anymore. You should also scan your computer with a reliable antivirus program to make sure that there are no traces of viruses or malware left by CleanFiles Downloader or the files you downloaded from CleanFiles.net.

-

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/El Inolvidable Simon Birch [DVDRIP][.Spanish.].por.GammaRay.avi.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/El Inolvidable Simon Birch [DVDRIP][.Spanish.].por.GammaRay.avi.md
deleted file mode 100644
index e40ebfd9ed512b128fbb4ab27c12f7ad00f83968..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/El Inolvidable Simon Birch [DVDRIP][.Spanish.].por.GammaRay.avi.md
+++ /dev/null
@@ -1,110 +0,0 @@

El Inolvidable Simon Birch: A Heartwarming Story of Faith and Friendship

-

Have you ever watched a movie that made you laugh, cry, and think at the same time? A movie that touched your heart and inspired your soul? A movie that showed you the beauty of life and the power of faith? If not, then you should definitely watch El Inolvidable Simon Birch, a 1998 American comedy-drama film based on the novel A Prayer for Owen Meany by John Irving. In this article, I will tell you what this movie is about, who are the main characters, what are the themes and messages, and why you should watch it.

-

Download Zip: https://byltly.com/2uKvIB



-

Introduction

-

What is the movie about?

-

El Inolvidable Simon Birch is a movie about a boy named Simon Birch who was born with a rare condition that made him very small and weak. Despite his physical limitations, he has a strong spirit and a firm belief that God has a special plan for him. He lives in a small town in New Hampshire in the 1960s with his parents who don't care much about him. His only friend is Joe Wenteworth, a boy who was born out of wedlock and doesn't know who his father is. Together, they go through many adventures and challenges as they try to find their purpose in life.

-

Who are the main characters?

-

The main characters of the movie are:

- Simon Birch (Ian Michael Smith), a boy born with a rare condition that leaves him unusually small and frail, but with an unshakable faith that God has a plan for him.
- Joe Wenteworth (Joseph Mazzello), Simon's best friend, born out of wedlock and searching for the identity of his father.
- Rebecca Wenteworth (Ashley Judd), Joe's warm-hearted mother, who treats Simon like her own son.
- Ben Goodrich (Oliver Platt), Rebecca's boyfriend, who later becomes a father figure to Joe.
- Reverend Russell (David Strathairn), the town minister who clashes with Simon.
- The adult Joe (Jim Carrey), who narrates the story years later.

Why is it called El Inolvidable Simon Birch?

-

The movie is called El Inolvidable Simon Birch because it is the Spanish title of the film. The original title was Simon Birch, but it was changed to El Inolvidable Simon Birch for the Spanish-speaking markets. The word "inolvidable" means "unforgettable" in Spanish, which reflects how Simon left a lasting impression on everyone who knew him.

-

Plot Summary

-

Simon's birth and childhood

-

The movie begins with a flashback of Simon's birth in 1952. He was born prematurely and weighed less than two pounds. The doctors told his parents that he would not survive long, but he miraculously did. However, they also said that he would never grow beyond three feet tall and that he would have many health problems throughout his life.

-

Simon grew up feeling different from everyone else. He was often bullied by other kids for his size and appearance. He also had trouble breathing and had to use an oxygen tank sometimes. His parents were ashamed of him and neglected him. They never celebrated his birthday or gave him any presents.

-

The only person who cared for him was Rebecca Wenteworth, Joe's mother. She treated him like her own son and gave him love and attention. She also encouraged him to join the church choir and the Christmas pageant, where he met Joe.

-

Simon's friendship with Joe

-

Simon and Joe became best friends since they were both outsiders in their own way. They shared everything with each other and supported each other through thick and thin. They also had fun together by playing baseball, watching movies, reading comics, and exploring the town.

-

One day, they decided to sneak into Rebecca's bedroom to look for clues about Joe's father. They found a locket with a picture of Rebecca and a man they didn't recognize. They also found a baseball signed by Mickey Mantle, which they assumed belonged to Joe's father.

-


-

They took the baseball with them to play catch at the lake. However, when Simon threw the ball to Joe, it sailed past him and struck Rebecca, who was on a boat with Ben Goodrich. The blow knocked her into the water, and she drowned.

-

Simon felt guilty for killing Rebecca and wondered if it was part of God's plan for him. Joe was devastated by losing his mother and blamed Simon for her death. He also learned that Ben Goodrich was his father after finding out that he had the same locket as Rebecca.

-

Simon's quest to find his destiny

-

After Rebecca's funeral, Joe moved in with Ben Goodrich while Simon stayed with his parents. They drifted apart for a while until Ben invited Simon to join them on a camping trip. There, they reconciled their friendship and decided to run away together to find Joe's real father.

-

They boarded a bus that took them to another town where they met Miss Leavey (played by Jan Hooks), an old friend of Rebecca who ran an orphanage. She recognized Joe from Rebecca's pictures and offered to help them find Joe's father.

-

She took them to a diner where she introduced them to Mr. Baines (played by Jim Carrey), an adult version of Joe who narrated the story from the beginning. He told them that he never found out who his father was but that he didn't care anymore because he had Ben as his father figure.

-

He also told them that he became a successful writer because of Simon's influence on him. He said that Simon taught him how to see the world differently and how to appreciate life more.

-

Simon's heroic act and death

-

The next day, they headed back to their hometown on another bus, which was also carrying children from Miss Leavey's orphanage. On the way, a truck struck the bus and sent it plunging into a frozen lake.

-

Simon managed to escape from the bus through a window but saw that many children were still trapped inside. He decided to go back into the water to rescue them one by one using his oxygen tank as an air supply.

-

He saved all the children except one girl named Marjorie (played by Sam Morton), who was too scared to leave her seatbelt. Simon tried to calm her down but ran out of air before he could free her.

-

Joe saw what happened from outside and dived into the water to help them. He reached them just in time before they drowned but couldn't pull them out because they were too heavy.

-

Luckily, Ben arrived at the scene with some firefighters, who cut open the bus roof with chainsaws and pulled Joe, Simon, and Marjorie out of the water along with the other survivors.

-

However, it was too late for Simon who died from hypothermia in Joe's arms. Before he died, he told Joe that he finally found his destiny: saving those children from drowning.

-

Themes and Messages

-

The power of faith and belief

-

One of the main themes of the movie is the power of faith and belief. Simon is a character who has a strong faith in God and believes that he has a special mission in life. He doesn't let his physical condition or the negative opinions of others stop him from pursuing his dreams. He also inspires others to have faith and hope in themselves and in a higher purpose.

-

For example, he convinces Joe to believe that his father is someone important and that he can find him someday. He also helps Marjorie overcome her fear of water by telling her that God loves her and that he will protect her. He also shows Reverend Russell that he is wrong about judging him and that he is a true believer.

-

The value of friendship and loyalty

-

Another theme of the movie is the value of friendship and loyalty. Simon and Joe are best friends who share a bond that transcends their differences and circumstances. They are always there for each other and support each other through good times and bad times. They also have fun together and enjoy each other's company.

-

For example, they play baseball together even though Simon is not good at it. They also watch movies together and laugh at the funny scenes. They also run away together to find Joe's father and have an adventure. They also risk their lives for each other when they face danger.

-

The meaning of life and death

-

A third theme of the movie is the meaning of life and death. Simon is a character who has a different perspective on life and death than most people. He doesn't fear death because he believes that it is part of God's plan for him. He also thinks that life is a gift that should be cherished and lived fully.

-

For example, he celebrates his birthday every day because he doesn't know when he will die. He also makes a list of things he wants to do before he dies, such as kissing a girl, seeing the ocean, and being a hero. He also sacrifices his life to save others because he thinks that it is his destiny.

-

Conclusion

-

Why you should watch this movie

-

El Inolvidable Simon Birch is a movie that will make you laugh, cry, and think. It is a movie that will touch your heart and inspire your soul. It is a movie that will show you the beauty of life and the power of faith.

-

You should watch this movie because it will teach you some valuable lessons about friendship, loyalty, courage, belief, purpose, and destiny. You should watch this movie because it will make you appreciate what you have and what you can do. You should watch this movie because it will make you remember Simon Birch, an unforgettable boy who changed the lives of many people.

-

FAQs

-
  1. Q: Is El Inolvidable Simon Birch based on a true story?
    A: No, El Inolvidable Simon Birch is not based on a true story. It is based on a novel called A Prayer for Owen Meany by John Irving. However, some aspects of the movie are inspired by real events or people, such as the bus accident or the actor who played Simon.
  2. Q: Who played Simon Birch?
    A: Simon Birch was played by Ian Michael Smith, a boy who was born with Morquio syndrome, the same condition as Simon's character. The director Mark Steven Johnson discovered him after seeing his picture in an article about children with rare diseases. He was 11 years old when he made his debut in the movie.
  3. Q: What happened to Ian Michael Smith after the movie?
    A: Ian Michael Smith continued acting for a time; his screen credits include The Secret Agent Club (1996), The Final Season (2007), and The Lurking Man (2017). He also graduated from MIT with a degree in computer science and became a software engineer.
  4. Q: Why did John Irving dislike the movie?
    A: John Irving, the author of the novel A Prayer for Owen Meany, disliked the movie adaptation because he felt that it changed too many things from his original story. He objected to how the characters' names were changed, how the setting was changed, how some scenes were added or deleted, and how some themes were altered or omitted. He did not allow the filmmakers to use his title or his characters' names, which is why the film is credited only as "suggested by" the novel.
  5. Q: Where can I watch El Inolvidable Simon Birch?
    A: You can watch El Inolvidable Simon Birch on various streaming platforms, such as Amazon Prime Video, YouTube, Google Play Movies & TV, iTunes, Vudu, or Hulu. You can also buy or rent it on DVD or Blu-ray.
-

-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA 3 A Masterpiece or a Menace?.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA 3 A Masterpiece or a Menace?.md
deleted file mode 100644
index d5d92c75973cdb58e399cb483a377211baa1c260..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/GTA 3 A Masterpiece or a Menace?.md
+++ /dev/null
@@ -1,12 +0,0 @@
-

Is GTA 3 Worth It?

-

Grand Theft Auto III, or GTA 3, is a 2001 action-adventure game developed by DMA Design and published by Rockstar Games. It is the third main entry in the Grand Theft Auto series, and the fifth instalment overall. It is set in a fictional city called Liberty City, loosely based on New York City, and follows the story of Claude, a silent protagonist who seeks revenge after being betrayed by his girlfriend during a robbery.

-

GTA 3 is widely considered as one of the most influential and groundbreaking games of its time, as it was the first game in the series to feature a fully 3D open world that players can explore freely. The game offers a variety of missions, activities, vehicles, weapons, and characters to interact with, as well as a darkly comic storyline and a stellar voice acting. The game also features a stunning soundtrack that includes licensed music from various genres and radio stations.

-

Download: https://byltly.com/2uKA8T



-

GTA 3 has received critical acclaim from critics and gamers alike, and has won several awards, including Game of the Year from various publications. It has also sold over 14.5 million copies worldwide, making it one of the best-selling games of all time. The game has been ported to many different platforms, including Windows, Xbox, Mac OS X, Android, iOS, and Fire OS. The game also received an enhanced version for its tenth anniversary in 2011, and another one for its twentieth anniversary in 2021.

-

So, is GTA 3 worth it? The answer depends on what you are looking for in a game. If you are looking for a classic game that defined the open world genre and offers a lot of fun and freedom, then GTA 3 is definitely worth it. However, if you are looking for a game that has modern graphics, gameplay mechanics, and features, then you might find GTA 3 outdated and clunky compared to newer games in the series or genre. Ultimately, GTA 3 is a game that deserves respect and appreciation for its legacy and impact on gaming history.


-

GTA 3 is not only a game, but also a cultural phenomenon that has influenced many other games, movies, music, and art. The game has been referenced and parodied in various media, such as The Simpsons, Family Guy, South Park, Robot Chicken, and The Office. The game has also inspired many real-life events and controversies, such as lawsuits, crimes, protests, and bans. For example, in 2003, a teenager named Devin Moore killed three people and stole a police car in Alabama, and claimed that he was influenced by GTA 3. He was later sentenced to death.

-

GTA 3 is also a game that has sparked many debates and discussions about the role of violence, sex, morality, and ethics in video games. The game has been criticized by many groups and individuals for its depiction of violence, especially towards women, minorities, and law enforcement. The game has also been accused of promoting crime, drug use, racism, sexism, and misogyny. Some critics have argued that GTA 3 is a satire and a critique of American society and culture, while others have argued that it is a glorification and a celebration of it.

-

GTA 3 is a game that has left a lasting impression on the gaming industry and the gaming community. It is a game that has challenged the boundaries of what video games can do and be. It is a game that has given players a sense of freedom and empowerment that few games can match. It is a game that has made history and changed the world.

-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Civil 3D 2015 Keygen Xforce Rar Free Download !EXCLUSIVE!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Civil 3D 2015 Keygen Xforce Rar Free Download !EXCLUSIVE!.md
deleted file mode 100644
index 4eb9eb4d5af3d31b4be9b9cd40180f188b460215..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Civil 3D 2015 Keygen Xforce Rar Free Download !EXCLUSIVE!.md
+++ /dev/null
@@ -1,90 +0,0 @@

Civil 3D 2015 Keygen Xforce Rar Free Download - A Guide to Activate Autodesk Civil 3D 2015 and Other Products

-

Autodesk Civil 3D 2015 is a powerful software that allows civil engineers and designers to create, analyze, and document civil engineering projects. It offers features such as dynamic modeling, geospatial analysis, stormwater management, site grading, and more. However, to use Autodesk Civil 3D 2015 and other Autodesk products of the 2015 version, you need to have a valid product key that can activate the software and unlock all its features and options.

-

Download: https://imgfil.com/2uy0yW



-

One way to get a product key for Autodesk Civil 3D 2015 and other products is to purchase it from the official website or an authorized dealer. However, this can be expensive and not affordable for everyone. Another way to get a product key for Autodesk Civil 3D 2015 and other products is to use the Civil 3D 2015 keygen xforce rar free download. This is a file that contains a program called X-Force 2015 that can generate product keys for all Autodesk products of the 2015 version, including Civil 3D 2015. In this article, we will explain what is the Civil 3D 2015 keygen xforce rar free download, how to use it, and what are the benefits and risks of using it.

-

What is the Civil 3D 2015 Keygen Xforce Rar Free Download?

-

The Civil 3D 2015 keygen xforce rar free download is a file that contains a program called X-Force 2015. X-Force 2015 is a jailbreak software that can generate product keys for all Autodesk products of the 2015 version, such as Civil 3D 2015, AutoCAD 2015, Revit 2015, etc. The product key is required when you install an Autodesk product as a point product or from a product suite. It allows you to activate the product and use all its features and options without any limitations or restrictions.

-

The Civil 3D 2015 keygen xforce rar free download is available on various websites that provide cracks, patches, mods, and tools for different software and games. You can download it for free from these websites and use it to activate your Autodesk products of the 2015 version.

-

How to Use the Civil 3D 2015 Keygen Xforce Rar Free Download?

-

To use the Civil 3D 2015 keygen xforce rar free download, you need to follow these steps:

-
  1. Download the Civil 3D 2015 keygen xforce rar free download from a reliable source.
  2. Extract the rar file using a program like WinRAR or 7-Zip.
  3. Run the X-Force 2015 program as administrator.
  4. Select your Autodesk product from the list and click on Generate.
  5. Copy the generated product key and paste it in the installation window of your Autodesk product.
  6. Click on Next and follow the instructions to complete the installation.
  7. Restart your Autodesk product and enjoy its full features and options.
-

What are the Benefits and Risks of Using the Civil 3D 2015 Keygen Xforce Rar Free Download?

-

The Civil 3D 2015 keygen xforce rar free download has some benefits and risks for users who want to activate their Autodesk products of the 2015 version. Some of these benefits and risks are:

-

Benefits

- It lets you activate Autodesk Civil 3D 2015 and other Autodesk products of the 2015 version without buying a license.
- It unlocks all features and options of the software, with no limitations or restrictions.
- A single tool generates product keys for every Autodesk product of the 2015 version.

Risks

- Using a keygen violates Autodesk's license terms and may expose you to legal consequences.
- Files from crack sites can contain viruses or malware, so a download from an untrusted source can harm your computer.
- Software activated this way receives no official support, and the activation may stop working after updates or be detected and blocked by Autodesk.

Conclusion

-

The Civil 3D 2015 keygen xforce rar free download is a file that can help you activate your Autodesk products of the 2015 version, such as Civil 3D 2015. It can generate product keys for all Autodesk products of the 2015 version and allow you to use them without any limitations or restrictions. However, it also has some risks and challenges that you should be aware of and prepared for. The Civil 3D 2015 keygen xforce rar free download is not a perfect solution for activating your Autodesk products of the 2015 version, so weigh its benefits against its risks before you decide to use it.

-

-

-

If you are interested in using the Civil 3D 2015 keygen xforce rar free download, you can download it from the links below. However, we recommend that you use it at your own risk and discretion. We are not responsible for any damages or losses that may occur from using the Civil 3D 2015 keygen xforce rar free download.

-

Download Links for Civil 3D 2015 Keygen Xforce Rar Free Download

-

Here are some of the websites that offer the Civil 3D 2015 keygen xforce rar free download:


Final Words

-

We hope that this article has helped you understand what is the Civil 3D 2015 keygen xforce rar free download, how to use it, and what are the benefits and risks of using it. If you have any questions or comments, please feel free to leave them below. Thank you for reading and have a great day!


How to Use Autodesk Civil 3D 2015 After Activation

-

After you have activated your Autodesk Civil 3D 2015 using the Civil 3D 2015 keygen xforce rar free download, you can start using the software and enjoy its features and options. Here are some of the things you can do with Autodesk Civil 3D 2015:

- Create dynamic models of roads, pipelines, and other civil engineering structures that update automatically when the design changes.
- Perform geospatial analysis on survey and terrain data.
- Design stormwater management and drainage systems.
- Grade sites and calculate earthwork volumes.
- Analyze and document your projects with plans, sections, and reports.

Tips and Tricks for Using Autodesk Civil 3D 2015

-

To make the most out of your Autodesk Civil 3D 2015, here are some tips and tricks that can help you improve your skills and efficiency:

- Start new drawings from a template that already contains the styles and settings your projects need.
- Learn the keyboard shortcuts for the commands you use most often to speed up your work.
- Keep your surfaces, alignments, and survey data organized with consistent, descriptive names.
- Save your work frequently and keep backups of your project files.
- Use the tutorials, communities, and forums mentioned at the end of this article to keep learning.

-

Conclusion

-

In this article, we have discussed what is the Civil 3D 2015 keygen xforce rar free download, how to use it, what are the benefits and risks of using it, how to use Autodesk Civil 3D 2015 after activation, and some tips and tricks for using Autodesk Civil 3D 2015. We hope that this article has been informative and helpful for you. If you have any feedback or suggestions, please let us know in the comments section below.

-

If you are interested in using the Civil 3D 2015 keygen xforce rar free download, you can download it from the links we have provided in this article. However, we recommend that you use it at your own risk and discretion. We are not responsible for any damages or losses that may occur from using the Civil 3D 2015 keygen xforce rar free download.

-

If you want to learn more about Autodesk Civil 3D 2015 and other Autodesk products of the 2015 version, you can visit the official website or check out some of the online tutorials and courses that are available on various platforms. You can also join some of the online communities and forums that are dedicated to Autodesk Civil 3D 2015 and other Autodesk products of the 2015 version. You can share your experiences, ask questions, get answers, and learn from other civil engineers and designers who use Autodesk Civil 3D 2015 and other Autodesk products of the 2015 version.

-

Thank you for reading and have a wonderful day!

-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Db Bot 1.3a Crack [PATCHED] Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Db Bot 1.3a Crack [PATCHED] Download.md
deleted file mode 100644
index ba932094dbb7ecca080794af82d338219e7475c5..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Db Bot 1.3a Crack [PATCHED] Download.md
+++ /dev/null
@@ -1,6 +0,0 @@

Db Bot 1.3a Crack Download


DOWNLOAD ✦✦✦ https://imgfil.com/2uy0s5



-
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Download [BEST] Ta Ra Rum Pum Mp4 Download [BEST].md b/spaces/1gistliPinn/ChatGPT4/Examples/Download [BEST] Ta Ra Rum Pum Mp4 Download [BEST].md
deleted file mode 100644
index 2d137a6e9500d15d392518244d826cae7f8ddfdc..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Download [BEST] Ta Ra Rum Pum Mp4 Download [BEST].md
+++ /dev/null
@@ -1,6 +0,0 @@

Download Ta Ra Rum Pum Mp4 Download


Download ○○○ https://imgfil.com/2uxWZr



-
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Engineering Metrology And Measurements By Vijayaraghavan Pdf Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Engineering Metrology And Measurements By Vijayaraghavan Pdf Free Download.md
deleted file mode 100644
index 687bd233e42f5c80b62420436adccfcd739f86dc..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Engineering Metrology And Measurements By Vijayaraghavan Pdf Free Download.md
+++ /dev/null
@@ -1,6 +0,0 @@

Engineering Metrology And Measurements By Vijayaraghavan Pdf Free Download


DOWNLOAD ✔✔✔ https://imgfil.com/2uy05T



-
-
-
-

diff --git a/spaces/1line/AutoGPT/tests/unit/test_commands.py b/spaces/1line/AutoGPT/tests/unit/test_commands.py
deleted file mode 100644
index ecbac9b73bd9ad872931d77e144dd853b3d8ef64..0000000000000000000000000000000000000000
--- a/spaces/1line/AutoGPT/tests/unit/test_commands.py
+++ /dev/null
@@ -1,22 +0,0 @@
-"""Unit tests for the commands module"""
-from unittest.mock import MagicMock, patch
-
-import pytest
-
-import autogpt.agent.agent_manager as agent_manager
-from autogpt.app import execute_command, list_agents, start_agent
-
-
-@pytest.mark.integration_test
-def test_make_agent() -> None:
-    """Test the make_agent command"""
-    with patch("openai.ChatCompletion.create") as mock:
-        obj = MagicMock()
-        obj.response.choices[0].messages[0].content = "Test message"
-        mock.return_value = obj
-        start_agent("Test Agent", "chat", "Hello, how are you?", "gpt2")
-        agents = list_agents()
-        assert "List of agents:\n0: chat" == agents
-        start_agent("Test Agent 2", "write", "Hello, how are you?", "gpt2")
-        agents = list_agents()
-        assert "List of agents:\n0: chat\n1: write" == agents
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Blue Orchid Mod Apk and Experience a Gripping Story.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Blue Orchid Mod Apk and Experience a Gripping Story.md
deleted file mode 100644
index bcca4bb7a67139fa0a0c9c668359d81be2e4994c..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Blue Orchid Mod Apk and Experience a Gripping Story.md
+++ /dev/null
@@ -1,139 +0,0 @@

Blue Orchid Mod APK: A Guide for Interactive Story Lovers

-

If you are a fan of interactive stories, you might have heard of Blue Orchid, a game that lets you create your own character and live your own adventure. But did you know that there is a modded version of the game that gives you unlimited gems, coins, and choices? In this article, we will tell you everything you need to know about Blue Orchid Mod APK, including what it is, why you should download it, how to play it, and what are its pros and cons. Let's get started!

-

What is Blue Orchid?

-

A brief introduction to the game

-

Blue Orchid is an interactive story game developed by Elia Games. It is available for Android devices and can be downloaded from Google Play Store. The game is set in a fictional city called Blue Orchid, where you can choose from different genres of stories, such as romance, drama, mystery, fantasy, and more. You can customize your character's appearance, name, personality, and preferences. You can also interact with other characters, make decisions that affect the outcome of the story, and enjoy various mini-games and activities.

-

Download Zip: https://urlin.us/2uSSob



-

The main features of the game

-

Some of the features that make Blue Orchid stand out from other interactive story games are:

- A library of stories in different genres, such as romance, drama, mystery, and fantasy.
- Full character customization: appearance, name, personality, and preferences.
- Choices that affect the outcome of each story and your relationships with other characters.
- Mini-games and activities woven into the stories, such as match-3 puzzles, trivia quizzes, and dress-up games.

Why download Blue Orchid Mod APK?

-

The benefits of using the modded version

-

While Blue Orchid is a free-to-play game, it also has some in-app purchases that require real money. For example, you need gems to unlock premium choices and outfits, coins to buy gifts and items, and tickets to access new chapters. These resources are limited and can run out quickly if you play frequently. This can limit your options and enjoyment of the game.

-

That's why some players prefer to use Blue Orchid Mod APK, which is a modified version of the game that gives you unlimited gems, coins, and tickets. With this modded version, you can enjoy the following benefits:

- Unlimited gems, so you can pick every premium choice and outfit you want.
- Unlimited coins for buying gifts and items.
- Unlimited tickets, so you can read new chapters without waiting or paying.

How to download and install Blue Orchid Mod APK

-

If you want to try Blue Orchid Mod APK, you need to follow these steps:

-
  1. Uninstall the original version of Blue Orchid from your device if you have it installed.
  2. Download Blue Orchid Mod APK from a reliable source such as PlayMods.
  3. Enable unknown sources in your device settings to allow the installation of third-party apps.
  4. Locate the downloaded file in your device storage and tap on it to start the installation process.
  5. Follow the instructions on the screen to complete the installation.
  6. Launch the game and enjoy the modded features.
-

Note: You may need to grant some permissions to the app to run properly. Also, make sure to download the modded version from a trusted source to avoid any malware or viruses.
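
Since an APK is just a ZIP archive, one quick way to catch a corrupted or truncated download before sideloading it is to check that the archive opens cleanly. The sketch below is an illustration, not part of any mod: the file name is a placeholder, and passing this check does not prove the APK is safe, only that it is intact.

```python
# Minimal sketch: sanity-check a downloaded APK (a ZIP archive) before install.
import zipfile
from pathlib import Path

def apk_looks_intact(path: str) -> bool:
    """True if the file is a readable ZIP whose entries pass a CRC check."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as apk:
        # testzip() returns the name of the first corrupt entry, or None.
        return apk.testzip() is None and "AndroidManifest.xml" in apk.namelist()

apk = Path("blue-orchid-mod.apk")  # placeholder file name
if apk.exists():
    print(apk_looks_intact(str(apk)))
```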

-

How to play Blue Orchid: Interactive Story

-

The basic gameplay mechanics

-

Playing Blue Orchid is simple and intuitive. Here are the basic steps you need to follow:

-
  1. Choose a story genre that interests you from the main menu. You can browse through different categories such as romance, drama, mystery, fantasy, and more.
  2. Create your character by selecting their gender, appearance, name, and personality. You can also change their outfit and accessories later in the game.
  3. Start the story and read the dialogue and narration. You can tap on the screen to proceed or swipe left or right to go back or forward.
  4. Make choices that affect the plot and your relationships with other characters. Some choices are free, while others require gems or coins. You can also use tickets to unlock new chapters.
  5. Enjoy the mini-games and activities that are part of the story. For example, you can play match-3 puzzles, trivia quizzes, dress-up games, and more.
  6. Earn rewards such as gems, coins, tickets, and items by completing achievements, watching ads, or spinning the wheel.
-

The tips and tricks for a better experience

-

If you want to have more fun and success in Blue Orchid, here are some tips and tricks you can use:

-


- Save your gems and coins for the premium choices and outfits that matter most to you, since premium choices can change how a story unfolds.
- Collect the free rewards from achievements, ads, and the daily wheel before spending anything.
- Replay chapters to explore the choices you skipped the first time.
- Check in regularly so you do not miss new chapters and bonuses.

The pros and cons of Blue Orchid Mod APK

-

The advantages of the modded version

-

Using Blue Orchid Mod APK has some advantages that make it appealing for many players. Some of them are:

- You get unlimited gems, coins, and tickets, so you never run out of resources mid-story.
- You can unlock premium choices, outfits, and chapters without spending real money.
- You can play as much as you want without waiting for resources to refill.

The disadvantages of the modded version

-

However, using Blue Orchid Mod APK also has some disadvantages that you should be aware of before downloading it. Some of them are:

- The modded app is not an official release, so it may be unstable or have technical issues.
- Your account could be banned or suspended if the developers detect the modified client.
- You may miss out on some of the original features, updates, or content of the game.
- Downloading from untrusted sources carries a risk of malware or viruses.

Conclusion

-

A summary of the main points

-

In conclusion, Blue Orchid is an interactive story game that lets you create your own character and live your own adventure in a fictional city. You can choose from different genres of stories, customize your character's appearance and personality, interact with other characters, make decisions that affect the outcome of the story, and enjoy various mini-games and activities. The game is free-to-play but also has some in-app purchases that require real money. If you want to have unlimited resources such as gems, coins, and tickets, you can download Blue Orchid Mod APK, which is a modified version of the game that gives you these benefits. However, you should also be aware of the potential risks and drawbacks of using this modded version, such as technical issues, getting banned or suspended from the game, or missing out on some of the original features or content of the game.

-

A call to action for the readers

-

Now that you know everything about Blue Orchid Mod APK, you can decide whether you want to download it or not. If you do, make sure to follow the instructions we provided and enjoy the game with unlimited resources. If you don't, you can still play the original version of Blue Orchid and have a great time with the interactive stories. Either way, we hope you have fun and share your thoughts and experiences with us in the comments section below. Happy gaming!

-

FAQs

-

Here are some of the frequently asked questions about Blue Orchid Mod APK:

-
  1. Is Blue Orchid Mod APK safe to use?
    Blue Orchid Mod APK is generally safe to use as long as you download it from a reliable source such as PlayMods. However, you should always be careful when installing third-party apps on your device and scan them for any malware or viruses.
  2. How do I update Blue Orchid Mod APK?
    Blue Orchid Mod APK is usually updated automatically when the original version of the game is updated. However, if you encounter any problems or errors, you can check the source where you downloaded the modded version and see if a newer version is available. You can also follow the official social media accounts of Blue Orchid for news and updates.
  3. Can I play Blue Orchid Mod APK offline?
    No. You need an internet connection to access the game and its features. However, you can play some of the mini-games and activities offline once you have downloaded them.
  4. Can I transfer my progress from Blue Orchid to Blue Orchid Mod APK or vice versa?
    No. The two versions of the game are not compatible and have different data and files. If you want to switch from one version to the other, you will have to start from scratch.
  5. Can I play Blue Orchid Mod APK with my friends?
    Yes. You can connect your game account to your Facebook account and invite your friends to join you in the game. You can also chat with them, send them gifts, and compete with them on the leaderboards.

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/APK5-30 .md b/spaces/1phancelerku/anime-remove-background/APK5-30 .md
deleted file mode 100644
index f94e9385b69aaee76083bf2ff5dfb074e1111ac4..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/APK5-30 .md
+++ /dev/null
@@ -1,129 +0,0 @@

What is APK5-30 and Why You Need It

-

If you are looking for a reliable, efficient, and cost-effective axial fan for your cooling and ventilation needs, you might want to consider APK5-30. This is a product from Teral, a leading manufacturer of pumps and fans in Japan. In this article, we will explain what APK5-30 is, what are its features and benefits, how to use it, and how it compares with other axial fans in the market.

-

Introduction

-

Cooling and ventilation are essential for many industrial applications, such as machinery, equipment, exhaust, and air conditioning. However, not all fans are created equal. Some fans may not be able to deliver the required airflow and pressure, some may consume too much energy and generate too much noise, and some may not be durable or easy to install and maintain. That's why you need a fan that can meet your specific needs and expectations.

-

Download: https://jinyurl.com/2uNPSM



-

What is APK5-30?

-

APK5-30 is a type of axial fan that uses an aluminum impeller and a belt drive system to create a high-efficiency airflow. It has a circular shape that can be directly mounted on a duct or suspended from a ceiling. It can handle air temperatures from 0 to 40 degrees Celsius and has a frequency of 50Hz or 60Hz depending on the region. It has a size of 300mm, an output of 0.4kW, a voltage of 200V, and a speed of 4P.
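
For reference, those ratings can be written down as a small data structure so a planned installation can be checked against them. The sketch below encodes only the figures quoted in this paragraph; the checking logic itself is an illustration, not a Teral tool.

```python
# Illustrative sketch: the APK5-30 ratings quoted above as a data structure,
# plus a simple check that a planned duty stays inside them.
from dataclasses import dataclass

@dataclass(frozen=True)
class FanSpec:
    size_mm: int
    output_kw: float
    voltage_v: int
    poles: int
    min_air_c: float
    max_air_c: float

APK5_30 = FanSpec(size_mm=300, output_kw=0.4, voltage_v=200, poles=4,
                  min_air_c=0.0, max_air_c=40.0)

def within_rating(spec: FanSpec, air_temp_c: float, supply_v: int) -> bool:
    """True if the planned operating conditions fall inside the rated ones."""
    return spec.min_air_c <= air_temp_c <= spec.max_air_c and supply_v == spec.voltage_v

print(within_rating(APK5_30, air_temp_c=35.0, supply_v=200))  # True
print(within_rating(APK5_30, air_temp_c=55.0, supply_v=200))  # False
```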

-

What are the features and benefits of APK5-30?

-

APK5-30 has many features and benefits that make it a superior choice for cooling and ventilation purposes. Here are some of them:

- A lightweight, corrosion-resistant aluminum impeller that creates a high-efficiency airflow.
- A belt drive system that transmits power smoothly from the motor to the impeller.
- A compact circular casing that mounts directly on a duct or hangs from a ceiling.
- A top-runner efficiency motor (IE3 equivalent) that keeps power consumption low. (Except for 0.2 to 0.4kW models.)
- Low noise and vibration, a long service life, and simple maintenance.

How to use APK5-30 for your cooling and ventilation needs

-

Now that you know what APK5-30 is and what it can do for you, let's see how you can use it for your cooling and ventilation needs. Here are some tips on how to install, operate, and maintain APK5-30.

-

How to install APK5-30

-

To install APK5-30, you need to follow these steps:

-
  1. Select a suitable location for the fan that has enough space, ventilation, and accessibility.
  2. Prepare the duct or ceiling where the fan will be mounted or suspended.
  3. Connect the fan to the power supply according to the wiring diagram provided by the manufacturer.
  4. Secure the fan with bolts or nuts on the duct or ceiling.
  5. Check the rotation direction of the impeller by turning on the power briefly.
  6. If the rotation direction is incorrect, reverse the wiring connection.
-

How to operate APK5-30

-

To operate APK5-30, you need to follow these steps:

  1. Turn on the power switch and adjust the speed controller if needed.
  2. Monitor the fan operation and check for any abnormal sounds, vibrations, or smells.
  3. If the fan stops working or malfunctions, turn off the power immediately and contact the manufacturer or a qualified technician.
-

How to maintain APK5-30

-

To maintain APK5-30, you need to follow these steps:

-
  1. Turn off the power and disconnect the fan from the power supply before cleaning or inspecting.
  2. Clean the fan regularly with a soft cloth or a brush to remove any dust or dirt.
  3. Check the fan for any signs of wear, damage, or corrosion and replace any defective parts as soon as possible.
  4. Lubricate the bearings and belts periodically with the recommended oil or grease.
  5. Store the fan in a dry and cool place when not in use.
-

Comparison of APK5-30 with other axial fans

-

Now that you know how to use APK5-30, let's see how it compares with other axial fans in the market. Here are some aspects that you can use to evaluate different axial fans:

-

How APK5-30 differs from other axial fans

-

APK5-30 differs from other axial fans in several ways, such as:

-


- It uses an aluminum impeller, which is lighter and more corrosion-resistant than common steel or plastic impellers.
- It uses a belt drive system, so the fan speed can be matched to the duty by changing the pulley ratio.
- Its circular casing can be mounted directly on a duct or suspended from a ceiling, which many axial fans do not allow.
- It is driven by a top-runner efficiency motor (IE3 equivalent), while many conventional fans use standard-efficiency motors.

How APK5-30 performs better than other axial fans

-

APK5-30 performs better than other axial fans in several ways, such as:

- It delivers strong airflow and pressure for its 300mm size thanks to the high-efficiency aluminum impeller.
- It runs with less noise and vibration than comparable fans.
- It has a long service life and needs only simple, infrequent maintenance.

How APK5-30 saves energy and costs compared with other axial fans

-

APK5-30 saves energy and costs compared with other axial fans in several ways, such as:

- Its top-runner efficiency motor (IE3 equivalent) draws less electricity for the same airflow, which also cuts carbon emissions. (Except for 0.2 to 0.4kW models.)
- Lower power consumption translates directly into lower running costs over the fan's service life, as the sketch below illustrates.
- Its durability and simple upkeep reduce maintenance and replacement costs.
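
To make the energy point concrete, here is a rough yearly cost comparison between a standard-efficiency motor and an IE3-class one. This is an illustrative sketch: the duty hours, electricity price, and efficiency figures are assumptions chosen for the example, not Teral data.

```python
# Illustrative sketch: annual energy cost of a 0.4 kW fan at two motor
# efficiencies. All numbers below are example assumptions, not specifications.
HOURS_PER_YEAR = 8 * 250   # assumed duty: 8 hours/day, 250 days/year
PRICE_PER_KWH = 0.20       # assumed electricity price per kWh
SHAFT_KW = 0.4             # rated output of the APK5-30

def annual_cost(shaft_kw: float, motor_efficiency: float) -> float:
    """Electrical input = shaft power / efficiency, times hours and price."""
    return shaft_kw / motor_efficiency * HOURS_PER_YEAR * PRICE_PER_KWH

standard = annual_cost(SHAFT_KW, 0.78)   # assumed standard-efficiency motor
ie3 = annual_cost(SHAFT_KW, 0.855)       # assumed IE3-class efficiency
print(f"standard: {standard:.2f}/yr, IE3: {ie3:.2f}/yr, saving: {standard - ie3:.2f}/yr")
```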

Conclusion

-

Summary of the main points

-

In conclusion, APK5-30 is a type of axial fan that uses an aluminum impeller and a belt drive system to create a high-efficiency airflow. It has many features and benefits that make it a superior choice for cooling and ventilation purposes. It is easy to install, operate, and maintain, and it performs better than other axial fans in terms of airflow, pressure, noise, vibration, service life, and maintenance cost. It also saves energy and costs by using a top-runner efficiency motor (IE3 equivalent) that reduces electricity consumption and carbon emissions. (Except for 0.2 to 0.4kW models)

-

Call to action

-

If you are interested in purchasing APK5-30 or learning more about it, please visit our website or contact us today. We will be happy to assist you with any questions or inquiries you may have. Don't miss this opportunity to get your hands on this amazing product that will improve your cooling and ventilation needs.

-

Frequently Asked Questions

-

What is the warranty period for APK5-30?

-

The warranty period for APK5-30 is one year from the date of purchase. If you encounter any problems with the product during this period, please contact us for repair or replacement.

-

What are the dimensions and weight of APK5-30?

-

The dimensions of APK5-30 are 300mm x 300mm x 300mm (L x W x H) and the weight is 9kg.

-

What are the applications of APK5-30?

-

APK5-30 can be used for various cooling and ventilation applications, such as:

- Cooling machinery and equipment in factories and workshops.
- Exhaust and general ventilation of industrial buildings.
- Supporting air-conditioning and air-circulation systems.

How can I order APK5-30 online?

-

You can order APK5-30 online by visiting our website and filling out the order form. You will need to provide your name, address, phone number, email, and payment method. We will confirm your order and ship the product to you as soon as possible.

-

What are the safety precautions for using APK5-30?

-

When using APK5-30, you should follow these safety precautions:

- Turn off the power and disconnect the fan before cleaning, inspecting, or servicing it.
- Keep fingers, tools, and loose clothing away from the impeller and belt while the fan is running.
- Use the fan only within its rated conditions (0 to 40 degrees Celsius air temperature, 200V supply).
- Stop the fan immediately if you notice abnormal noise, vibration, or smells.

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Bubble Shooter Enjoy the Original Bubble Pop Game on Your iOS Device.md b/spaces/1phancelerku/anime-remove-background/Bubble Shooter Enjoy the Original Bubble Pop Game on Your iOS Device.md
deleted file mode 100644
index 584b2e976c4a31ab3e9229c6e3fa81699d23d168..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Bubble Shooter Enjoy the Original Bubble Pop Game on Your iOS Device.md
+++ /dev/null
@@ -1,154 +0,0 @@

Bubble Shooter for iPhone Free Download: How to Play the Classic and Addictive Game on Your iOS Device

-

If you are looking for a fun and relaxing game to play on your iPhone, you might want to try Bubble Shooter. Bubble Shooter is a classic and addictive game that has been around for decades and is still popular among millions of players worldwide. In this article, we will tell you everything you need to know about Bubble Shooter, including what it is, how to download it for free, and how to play it on your iOS device. Let's get started!

-

What is Bubble Shooter?

-

Bubble Shooter is a puzzle game that involves shooting bubbles of the same color to make them pop and clear the board. The game is simple to learn but challenging to master, as you need to aim carefully and plan your moves ahead. The game has many variations and versions, but the basic concept remains the same: match 3 or more bubbles of the same color to burst them and score points.

-

Download: https://jinyurl.com/2uNJP2



-

The history of Bubble Shooter

-

Bubble Shooter was originally developed by a company called Taito in 1994 as an arcade game called Puzzle Bobble. The game was a spin-off of the popular platformer game Bubble Bobble, which featured two cute dragons named Bub and Bob. Puzzle Bobble was later ported to various home consoles and computers, and became a huge hit worldwide. The game spawned several sequels and clones, and inspired many other bubble shooting games over the years.

-

The gameplay of Bubble Shooter

-

The gameplay of Bubble Shooter is very simple: you have a cannon at the bottom of the screen that shoots bubbles of different colors. You can aim the cannon by moving your finger or mouse cursor on the screen, and tap or click to fire a bubble. Your goal is to match 3 or more bubbles of the same color to make them pop and clear them from the board. If you clear all the bubbles, you win the level and move on to the next one. If the bubbles reach the bottom of the screen, you lose the game and have to start over.
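
To show what "match 3 or more" means in code, here is a minimal sketch of how a game might find the group of same-colored bubbles to pop after a shot lands. It is an illustration only: real bubble shooters use a hexagonal grid, but a square grid and a flood fill keep the idea short.

```python
# Minimal sketch: flood fill to find the same-colored cluster around a bubble.
from collections import deque

def matching_cluster(grid, row, col):
    """Return the set of cells connected to (row, col) that share its color."""
    color = grid[row][col]
    seen = {(row, col)}
    queue = deque([(row, col)])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and (nr, nc) not in seen and grid[nr][nc] == color):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

grid = [["R", "R", "B"],
        ["R", "B", "B"],
        ["G", "B", "G"]]
cluster = matching_cluster(grid, 0, 0)
if len(cluster) >= 3:            # the match-3 rule: pop clusters of 3 or more
    print(f"Pop {len(cluster)} bubbles:", sorted(cluster))
```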

-

The benefits of playing Bubble Shooter

-

Bubble Shooter is not only a fun and entertaining game, but also a beneficial one. Playing Bubble Shooter can help you improve your skills in various ways, such as:

- Hand-eye coordination and reaction speed, since you have to aim and time your shots.
- Concentration and focus, as clearing a crowded board takes sustained attention.
- Strategic thinking and planning, because each shot changes what the next one can achieve.

Besides, playing Bubble Shooter can also make you happy and relaxed, as popping bubbles can release endorphins in your brain that make you feel good.

-

How to download Bubble Shooter for iPhone for free?

-

If you want to play Bubble Shooter on your iPhone, you have plenty of options to choose from. There are many free apps that offer different versions and variations of Bubble Shooter on the App Store. Here are some of the best ones that we recommend:

-

The best Bubble Shooter apps on the App Store

-

Bubble Shooter - Pop Bubbles

-

This app is one of the most popular and highly rated Bubble Shooter games on the App Store. It offers classic and addictive gameplay with thousands of fun levels, amazing graphics and sounds, and various challenges and rewards. You can also play with your friends and family online and compete for the highest score. The app is free to download and play, but it contains ads and in-app purchases. You can find it on the App Store by searching for Bubble Shooter - Pop Bubbles.

-

-

Bubble Shooter - Addictive!

-

    This app is another great option for Bubble Shooter fans. It features smooth and easy gameplay with over 3,000 exciting levels, stunning graphics and effects, and a relaxing soundtrack. You can also customize your bubble shooter with different skins and themes, and enjoy daily bonuses and gifts. The app is free to download and play, but it contains ads and in-app purchases. You can download it from here: [Bubble Shooter - Addictive!].
    

-

Bobble Shooter

-

    This app is a unique and innovative take on the Bubble Shooter genre. It combines the classic bubble-popping gameplay with a physics-based puzzle element: you shoot bobbles of different shapes and sizes to create clusters of the same color and make them explode. The game has hundreds of challenging levels, colorful graphics and animations, and catchy music. The app is free to download and play, but it contains ads and in-app purchases. You can download it from here: [Bobble Shooter].
    

-

How to install and launch Bubble Shooter on your iPhone

-

Installing and launching Bubble Shooter on your iPhone is very easy. Just follow these simple steps:

-
    -
      1. Open the App Store on your iPhone and search for the Bubble Shooter app that you want to download.
      2. Tap on the app icon and then tap on the Get button to start the download process.
      3. Wait for the app to finish downloading and then tap on the Open button to launch it.
      4. Alternatively, you can also find the app icon on your home screen and tap on it to launch it.
    
-

How to update and delete Bubble Shooter on your iPhone

-

    Updating and deleting Bubble Shooter on your iPhone is also very simple. Just follow these steps:
    

-
    -
      1. To update Bubble Shooter, open the App Store on your iPhone and tap on the Updates tab at the bottom.
      2. Find the Bubble Shooter app that you want to update and tap on the Update button next to it.
      3. Wait for the app to finish updating and then launch it as usual.
      4. To delete Bubble Shooter, press and hold the app icon on your home screen until it starts to wiggle.
      5. Tap on the X button on the top left corner of the app icon and then tap on Delete to confirm.
    
-

How to play Bubble Shooter on your iPhone?

-

Playing Bubble Shooter on your iPhone is very fun and easy. Here are some tips and tricks that will help you enjoy the game more:

-

The basic rules and tips of Bubble Shooter

-

    The basic rules of Bubble Shooter are simple: aim the cannon, shoot a bubble, and match three or more bubbles of the same color to pop them. Clear the whole board to win the level, and don't let the bubbles reach the bottom of the screen, or you lose.

    -
    

    Some tips that will help you improve your performance: plan your shots before firing, bounce bubbles off the side walls to reach tight spots, and aim for clusters near the top of the board so that everything hanging below them drops at once.

    -
    

The different game modes and levels of Bubble Shooter

-

    Bubble Shooter offers a variety of game modes and levels that will keep you entertained for hours, from the classic level-by-level progression to special events and team play with exclusive levels.

    -
    

The features and settings of Bubble Shooter

-

    Bubble Shooter also has many features and settings that will enhance your gaming experience, such as customizable skins and themes for your shooter, daily bonuses and gifts, power-ups and boosters, and adjustable sound and music options.

    -
    

Conclusion

-

    Bubble Shooter is a classic and addictive game that you can play on your iPhone for free. It offers simple but challenging gameplay with thousands of fun levels, amazing graphics and sounds, and various game modes and features. It also helps you improve your skills, reduce your stress, and have fun with your friends. If you are looking for a game that will keep you entertained for hours, download Bubble Shooter today and enjoy popping bubbles!
    

-

FAQs

-

Here are some frequently asked questions about Bubble Shooter:

-
    -
      1. How do I get more coins and gems in Bubble Shooter?

        You can get more coins and gems by popping bubbles, completing levels, achieving goals, watching ads, spinning the wheel, opening chests, collecting daily bonuses, joining events, or buying them with real money.

      2. How do I use power-ups and boosters in Bubble Shooter?

        You can use power-ups and boosters by tapping on them before or during the game. Power-ups are special bubbles with different effects, such as bombs, rainbows, and stars. Boosters are items that help you in various ways, such as extra moves, fireballs, and magnets.

      3. How do I unlock new levels in Bubble Shooter?

        You can unlock new levels by completing the previous levels or by paying coins or gems. You can also unlock new levels by joining events or teams that offer exclusive levels.

      4. How do I reset my progress in Bubble Shooter?

        You can reset your progress by deleting the app from your iPhone and reinstalling it. However, this will also erase all your coins, gems, power-ups, boosters, and lives. If you want to keep them, connect your Facebook account to Bubble Shooter and sync your progress across devices.

      5. How do I contact the support team of Bubble Shooter?

        You can contact the support team by tapping on the settings icon on the main screen and then tapping on the help button. You can also email them at support@bubbleshooter.com or visit their website at www.bubbleshooter.com.
    

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Crafting and Building 1.18 APK A Free Game with Amazing Graphics and Multiplayer Mode.md b/spaces/1phancelerku/anime-remove-background/Crafting and Building 1.18 APK A Free Game with Amazing Graphics and Multiplayer Mode.md deleted file mode 100644 index a72384daf227db3aa90dac1be1aabb55fb0587a6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Crafting and Building 1.18 APK A Free Game with Amazing Graphics and Multiplayer Mode.md +++ /dev/null @@ -1,121 +0,0 @@ -
-

Crafting and Building 1.18 APK: A Free Game for Creative Minds

-

    Do you like building games? Do you want to create your own world with your own rules? If so, you should try Crafting and Building 1.18 APK, a free sandbox game that lets you unleash your imagination and show off your skills. You can build anything you want, from houses and castles to farms and cities, play with your friends online, explore their creations, and have fun together. It is a game for the whole family, suitable for kids and adults alike.
    
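    To illustrate the sandbox idea (this is a generic sketch, not the game's actual code), a block world can be represented as a sparse mapping from coordinates to block types, which makes placing and removing blocks trivial:

    ```python
    # Generic illustration of a sparse voxel world; not Crafting and Building's real data model.
    world = {}  # (x, y, z) -> block type

    def place_block(pos, block):
        world[pos] = block

    def remove_block(pos):
        world.pop(pos, None)  # removing an empty cell is a no-op

    place_block((0, 0, 0), "stone")
    place_block((0, 1, 0), "wood")
    remove_block((0, 0, 0))
    print(world)  # {(0, 1, 0): 'wood'}
    ```
    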

-

Features of Crafting and Building 1.18 APK

-

    Crafting and Building 1.18 APK has many features that make it an enjoyable and addictive game, including creative and survival modes, online multiplayer, and a wide range of blocks to build with.
    

-

crafting and building 1.18 apk


Download File ★★★★★ https://jinyurl.com/2uNLwa



    -
    

Tips and Tricks for Crafting and Building 1.18 APK

-

    If you want to master Crafting and Building 1.18 APK, experiment freely: start in creative mode to learn the controls and block types, then take on survival mode, and join multiplayer servers to pick up ideas from other builders' creations.

    -
    

Reviews of Crafting and Building 1.18 APK

-

    Crafting and Building 1.18 APK has received a range of reviews from users who have played the game. Here are some of them:
    

    | User | Rating | Comment |
    | ---- | ------ | ------- |
    | Amy | 5 stars | I love this game! It's so fun and creative. I can build anything I want and play with my friends online. It's like Minecraft but better. |
    | Jack | 4 stars | This game is awesome, but it has some bugs and glitches. Sometimes the game crashes or freezes, and sometimes the blocks disappear or change color. Please fix these issues. |
    | Lisa | 3 stars | This game is good, but it needs more content and features. I wish there were more block types, more animals, more items, more modes, and more customization options. It gets boring after a while. |
    | Tom | 2 stars | This game is okay, but it's too similar to other games. It's like a copy of Minecraft or Roblox. It doesn't have anything original or unique. It's just another building game. |
    | Anna | 1 star | This game is terrible. It's full of ads and pop-ups that ruin the gameplay. It's also very laggy and slow. It takes forever to load and connect to the servers. It's a waste of time and space. |
    
-

Conclusion: Download Crafting and Building 1.18 APK Now!

-

    To sum up, Crafting and Building 1.18 APK is a free sandbox game that lets you create your own world with your own rules, build anything from houses and castles to farms and cities, and share it all with friends online. It is suitable for the whole family.
    

-

crafting and building 1.18 apk download free
-crafting and building 1.18 apk mod unlimited money
-crafting and building 1.18 apk latest version
-crafting and building 1.18 apk for android
-crafting and building 1.18 apk offline
-crafting and building 1.18 apk no ads
-crafting and building 1.18 apk update
-crafting and building 1.18 apk hack
-crafting and building 1.18 apk full version
-crafting and building 1.18 apk premium
-crafting and building 1.18 apk gameplay
-crafting and building 1.18 apk review
-crafting and building 1.18 apk features
-crafting and building 1.18 apk tips and tricks
-crafting and building 1.18 apk cheats
-crafting and building 1.18 apk guide
-crafting and building 1.18 apk tutorial
-crafting and building 1.18 apk best settings
-crafting and building 1.18 apk how to play
-crafting and building 1.18 apk requirements
-crafting and building 1.18 apk size
-crafting and building 1.18 apk screenshots
-crafting and building 1.18 apk video
-crafting and building 1.18 apk online multiplayer
-crafting and building 1.18 apk new features
-crafting and building 1.18 apk bugs fixes
-crafting and building 1.18 apk installation
-crafting and building 1.18 apk alternatives
-crafting and building 1.18 apk similar games
-crafting and building 1.18 apk comparison
-crafting and building 1.18 apk pros and cons
-crafting and building 1.18 apk ratings
-crafting and building 1.18 apk feedbacks
-crafting and building 1.18 apk comments
-crafting and building 1.18 apk questions and answers
-crafting and building 1.18 apk support
-crafting and building 1.18 apk developer contact
-crafting and building 1.18 apk official website
-crafting and building 1.18 apk social media links
-crafting and building 1.18 apk news and updates
-crafting and building 1.18 apk release date
-crafting and building 1.18 apk changelog
-crafting and building 1.18 apk download link
-crafting and building 1.18 apk mirror link
-crafting and building 1.18 apk direct link
-crafting and building 1.18 apk file information
-crafting and building 1.18 apk virus scan report
-crafting and building 1.18 apk safe to download

-

    If you are looking for a game that will challenge your creativity and imagination, then you should download Crafting and Building 1.18 APK now! You will not regret it!
    

-

FAQs: Frequently Asked Questions About Crafting and Building 1.18 APK

-

Here are some of the most common questions and answers about crafting and building 1.18 apk:

-

Q: How can I download crafting and building 1.18 apk?

-

A: You can download crafting and building 1.18 apk from the Google Play Store or from other websites that offer apk files. However, be careful when downloading from unknown sources, as they may contain viruses or malware.

-

Q: How can I update crafting and building 1.18 apk?

-

A: You can update crafting and building 1.18 apk from the Google Play Store or from the app itself. The app will notify you when there is a new version available and ask you to update it.

-

Q: How can I play crafting and building 1.18 apk offline?

-

A: You can play crafting and building 1.18 apk offline by choosing the single-player mode or the creative mode. You will not be able to access the multiplayer mode or the survival mode without an internet connection.

-

Q: How can I play crafting and building 1.18 apk with my friends?

-

A: You can play crafting and building 1.18 apk with your friends by choosing the multiplayer mode or the survival mode. You will need an internet connection and a valid account to join or create a server.

-

Q: How can I contact the developers of crafting and building 1.18 apk?

-

A: You can contact the developers of crafting and building 1.18 apk by sending them an email at genere@gmail.com or by leaving them a feedback on the Google Play Store or on their social media pages.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download MuksOS AI Launcher 2.0 Mod APK for Android - Latest Version with Voice Gesture and Text Control.md b/spaces/1phancelerku/anime-remove-background/Download MuksOS AI Launcher 2.0 Mod APK for Android - Latest Version with Voice Gesture and Text Control.md deleted file mode 100644 index 1e86d3427be2261fc902ec063e266ed91a00d0a4..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download MuksOS AI Launcher 2.0 Mod APK for Android - Latest Version with Voice Gesture and Text Control.md +++ /dev/null @@ -1,108 +0,0 @@ - -

MuksOS AI Launcher 2.0: A Smart and Interactive Android Launcher

-

    If you are looking for a new and innovative way to interact with your phone, you might want to check out MuksOS AI Launcher 2.0. This is a unique android launcher that combines the features of an app launcher, a virtual assistant, and an AI tool for your DIY automation projects. In this article, we will explain what MuksOS AI Launcher 2.0 is, what its features are, and how to download it, and we will answer some frequently asked questions.
    

-

muksos ai launcher 2.0 mod apk download


Download File 🌟 https://jinyurl.com/2uNMrY



-

What is MuksOS AI Launcher 2.0?

-

    MuksOS AI Launcher 2.0 is an android app developed by Dr. Mukesh Bangar, a computer engineer and researcher in artificial intelligence. It is designed to make your phone smarter and more responsive by using voice, gesture, or text commands. You can use MuksOS AI Launcher 2.0 to open apps, make calls, search the web, set alarms and reminders, and more. You can also use it as a virtual assistant that can assist you anytime, anywhere, much like JARVIS in the Iron Man movies. And if you are into DIY automation projects, you can use MuksOS AI Launcher 2.0 as an easy AI tool to create amazing things using its object recognition and smart connect features.
    

-

Features of MuksOS AI Launcher 2.0

-

MuksOS AI Launcher 2.0 has many features that make it stand out from other android launchers. Here are some of them:

-

Teachable

-

MuksOS AI Launcher 2.0 is not just a passive launcher that does what you say. It is also a teachable launcher that learns from you and adapts to your preferences. You can teach it voice commands, object recognition, and actions that suit your needs.
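    Conceptually, a teachable launcher just lets the user bind new phrases to actions at runtime. A toy sketch of that idea in Python (purely illustrative; the article gives no detail on MuksOS's real implementation):

    ```python
    # Toy "teachable" command registry; not MuksOS's actual code.
    commands = {"open camera": lambda: print("launching camera...")}

    def teach(phrase, action):
        """Bind a new voice/text phrase to an action."""
        commands[phrase.lower()] = action

    def handle(phrase):
        action = commands.get(phrase.lower())
        action() if action else print(f"unknown command: {phrase!r}")

    teach("call mom", lambda: print("dialing Mom..."))
    handle("call mom")    # dialing Mom...
    handle("play music")  # unknown command: 'play music'
    ```
    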

-

Fast and smooth

-

MuksOS AI Launcher 2.0 is designed to be fast and smooth, so you can get more done in less time. It has voice access that makes it faster than any other launcher and saves time. You can also use gestures or text commands if you prefer.

-

Multiple voice options

-

MuksOS AI Launcher 2.0 has six different voice options that you can choose from, depending on your mood and preference. You can switch between male and female voices, as well as different accents and languages.

-

    100% privacy
    

-

MuksOS AI Launcher 2.0 respects your privacy and does not store your personal data on cloud servers. All your data is stored locally on your device and encrypted for security.

-

muksos ai launcher 2.0 apk free download
-muksos ai launcher 2.0 latest version
-muksos ai launcher 2.0 android app
-muksos ai launcher 2.0 for pc
-muksos ai launcher 2.0 features
-muksos ai launcher 2.0 review
-muksos ai launcher 2.0 offline mode
-muksos ai launcher 2.0 voice access
-muksos ai launcher 2.0 smart connect
-muksos ai launcher 2.0 vision ability
-muksos ai launcher 2.0 write on home screen
-muksos ai launcher 2.0 speech reminders and alarm
-muksos ai launcher 2.0 dark and light theme
-muksos ai launcher 2.0 hide apps
-muksos ai launcher 2.0 power saver
-muksos ai launcher 2.0 teachable commands
-muksos ai launcher 2.0 object recognition
-muksos ai launcher 2.0 diy automation tool
-muksos ai launcher 2.0 virtual assistant
-muksos ai launcher 2.0 neon glow icons theme
-muksos ai launcher 2.0 apkcombo download
-muksos ai launcher 2.0 appbrain download
-muksos ai launcher 2.0 gameloop download
-muksos ai launcher 2.0 apk size and version
-muksos ai launcher 2.0 content rating and developer
-muksos ai launcher 2.0 install and update
-muksos ai launcher 2.0 google play id and category
-muksos ai launcher 2.0 interact with phone in natural way
-muksos ai launcher 2.0 open apps and contacts with voice or text or gestures
-muksos ai launcher 2.0 web search wikipedia or google or youtube with voice or text or gestures
-muksos ai launcher 2.0 create amazing AI projects with smart connect feature
-muksos ai launcher 2.0 train your mobile for object recognition and actions with vision ability feature
-muksos ai launcher 2.0 works without internet with offline mode feature
-muksos ai launcher 2.0 change theme in a single tap with dark and light theme feature
-muksos ai launcher 2.0 hide unwanted and distracting bloatware with hide apps feature
-muksos ai launcher 2.0 save phone battery and optimize battery usage with power saver feature
-muksos ai launcher 2.0 teach voice commands, object recognition and actions with teachable feature
-muksos ai launcher 2.0 get direct access to your favorite app from home screen with favorite apps feature
-muksos ai launcher 2.0 write on home screen to open apps, make a call or web search with write on home screen feature
-muksos ai launcher 2.0 quickly access all your apps, contacts, web searches, reminders, alarm etc with voice access feature

-

User friendly

-

MuksOS AI Launcher 2.0 is user friendly and easy to use. You don't need to scroll pages to find contacts, apps, alarms, reminders, etc. You can access them directly from the home screen with simple commands.

-

Power saver

-

MuksOS AI Launcher 2.0 saves your phone battery and optimizes battery usage by using minimal resources and background processes.

-

Esthetic theme

-

MuksOS AI Launcher 2.0 comes with a cool neon glow icons theme that's sure to stand out on your device. You can also customize the theme according to your liking by changing the colors, icons, fonts, and wallpapers.

-

Dark and Light theme

-

MuksOS AI Launcher 2.0 supports both dark and light themes that you can switch between depending on the time of the day or your preference. The dark theme is ideal for night time or low-light conditions, while the light theme is suitable for daytime or bright conditions.

-

Works offline

-

MuksOS AI Launcher 2.0 works offline as well as online, so you don't need to worry about internet connectivity or data usage. You can use most of the features without any internet connection, such as opening apps, making calls, setting alarms, reminders, etc.

-

Favorite apps

-

MuksOS AI Launcher 2.0 lets you add your favorite apps to the home screen for quick and easy access. You can also create folders and categories to organize your apps according to your needs.

-

Hide apps

-

MuksOS AI Launcher 2.0 allows you to hide apps that you don't want others to see or access. You can use a password or a fingerprint to lock and unlock the hidden apps.

-

Premium Features of MuksOS AI Launcher 2.0

-

MuksOS AI Launcher 2.0 also has some premium features that you can unlock by purchasing the mod apk version of the app. These features include:

-

Write on Home Screen

-

This feature lets you write anything on your home screen using your finger or a stylus. You can use this feature to take notes, draw sketches, make lists, etc.

-

Voice Access

-

This feature lets you control your phone with your voice without touching it. You can use voice commands to open apps, make calls, search the web, play music, etc.

-

Speech reminders and Speech alarm

-

This feature lets you set reminders and alarms with your voice. You can also choose what you want to hear when the reminder or alarm goes off, such as a song, a quote, a joke, etc.

-

Smart connect

-

This feature lets you connect your phone with other devices using Bluetooth or Wi-Fi. You can use this feature to transfer files, share photos, play games, etc.

-

Vision ability

-

This feature lets you use your phone's camera as an AI tool for object recognition and detection. You can use this feature to identify objects, faces, colors, text, etc.

-

How to download MuksOS AI Launcher 2.0 mod apk?

-

If you want to download MuksOS AI Launcher 2.0 mod apk and enjoy its premium features for free, you can follow these steps:

-
    -
      1. Go to the official website of MuksOS AI Launcher 2.0 and click on the download button.
      2. Allow unknown sources in your device settings to install the app from outside the Google Play Store.
      3. Locate the downloaded file in your file manager and tap on it to install it.
      4. Launch the app and grant it the necessary permissions to access your device features.
      5. Enjoy using MuksOS AI Launcher 2.0 mod apk with all its features unlocked.
    
-

Conclusion

-

    MuksOS AI Launcher 2.0 is a smart and interactive android launcher that offers a new way to interact with your phone. It stands out from other android launchers with its teachability, fast and smooth voice access, multiple voice options, 100% privacy, user-friendly design, power saver, esthetic dark and light themes, offline support, favorite apps, and hidden apps. It also has premium features that you can unlock by downloading the mod apk version, such as writing on the home screen, voice access, speech reminders and alarms, smart connect, and vision ability. If you are looking for an android launcher that combines an app launcher, a virtual assistant, and an AI tool for your DIY automation projects, MuksOS AI Launcher 2.0 is a strong choice.
    

-

FAQs

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Experience GTA V Like Never Before with Online RP Launcher.md b/spaces/1phancelerku/anime-remove-background/Experience GTA V Like Never Before with Online RP Launcher.md deleted file mode 100644 index 9fc3106bb31166039865714b288cd531cd8965a4..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Experience GTA V Like Never Before with Online RP Launcher.md +++ /dev/null @@ -1,130 +0,0 @@ -
-

What is an online rp launcher and why you need one

-

If you are a fan of Grand Theft Auto (GTA) Online, you might have heard of online rp launchers. These are software applications that allow you to play GTA Online on customized dedicated servers, with different game modes, maps, vehicles, weapons, and more. Online rp launchers are also known as multiplayer modifications or frameworks, and they enable you to create or join your own GTA Online community.

-

Online rp launchers work by modifying the game files of GTA V, but without affecting your original installation or your access to GTA Online. This means that you can switch between GTA Online and online rp launchers without getting banned by Rockstar. Online rp launchers also use Rockstar's network code with improvements, so you can enjoy the best synchronization and performance possible.

-

online rp launcher


Download Ziphttps://jinyurl.com/2uNMx6



-

Online rp launchers are not only fun and exciting, but also creative and innovative. You can make anything you wish with online rp launchers, such as roleplay, drifting, racing, deathmatch, or something completely original. You can also use different programming languages to create your own scripts and resources for your server. Online rp launchers give you total control over your GTA Online experience.

-

How to choose the best online rp launcher for your needs

-

There are many online rp launchers available for GTA Online, but not all of them are created equal. Some online rp launchers may have more features, compatibility, or popularity than others. Here are some factors to consider when choosing the best online rp launcher for your needs:

-

Features

-

The features of an online rp launcher determine what you can do with it. Some online rp launchers may have more options for customization, streaming, AI, scripting, or hosting than others. For example, some online rp launchers may allow you to use custom cars, maps, weapons, and more dynamically, while others may require you to download them manually. Some online rp launchers may also have more support for different programming languages or tools than others.

-

Compatibility

-

The compatibility of an online rp launcher determines how well it works with your system and your game version. Some online rp launchers may have higher or lower system requirements than others. For example, some online rp launchers may require Windows 10 or a certain CPU or GPU to run smoothly. Some online rp launchers may also be more compatible with the latest updates or patches of GTA V than others.

-

Popularity

-

The popularity of an online rp launcher determines how many players and servers are using it. Some online rp launchers may have more active and diverse communities than others. For example, some online rp launchers may have more players or servers in your region or language than others. Some online rp launchers may also have more famous or reputable servers or streamers than others.

-

FiveM - the GTA V multiplayer modification you have dreamt of

-

One of the most popular and well-known online rp launchers is FiveM. FiveM is a modification for GTA V that enables you to play multiplayer on customized dedicated servers powered by Cfx.re. FiveM has been around since 2014 and has over 178k players playing right now.

-

online rp launcher for GTA V multiplayer
-online rp launcher for GTA SAMP on Android
-online rp launcher for RAGE MP mod
-online rp launcher for FiveM server hosting
-online rp launcher for GTA real life roleplay
-online rp launcher for GTA drifting and racing
-online rp launcher for GTA deathmatch and PvP
-online rp launcher for GTA open world sandbox
-online rp launcher for GTA custom cars and maps
-online rp launcher for GTA AI and sync quality
-online rp launcher for GTA source-available platform
-online rp launcher for GTA community-driven project
-online rp launcher for GTA Cfx.re framework
-online rp launcher for GTA multiple programming languages
-online rp launcher for GTA developer tools and resources
-online rp launcher for GTA net energy gain experiment
-online rp launcher for GTA holy grail fusion project
-online rp launcher for GTA mini Sun creation
-online rp launcher for GTA 100 million°C reactor
-online rp launcher for GTA Korea Superconducting Tokamak Advanced Research facility
-online rp launcher for GTA Korea Institute of Fusion Energy
-online rp launcher for GTA nuclear fusion reaction
-online rp launcher for GTA physics and engineering problem
-online rp launcher for GTA contributor program and rewards
-online rp launcher for GTA Rockstar Online Services validation
-online rp launcher for GTA game copy protection
-online rp launcher for GTA installation switcher
-online rp launcher for GTA ban prevention
-online rp launcher for GTA login information security
-online rp launcher for GTA multiplayer modification framework
-online rp launcher for GTA advanced and unique features
-online rp launcher for GTA creativity and personalization
-online rp launcher for GTA streaming and dynamic content
-online rp launcher for GTA Lua, C#, and JavaScript support
-online rp launcher for GTA web development knowledge and ecosystem
-online rp launcher for GTA solar core temperature comparison
-online rp launcher for GTA seven times hotter than the Sun achievement
-online rp launcher for GTA 15 million degrees kelvins measurement
-online rp launcher for GTA radiative zone and convection zone layers
-online rp launcher for GTA photosphere and chromosphere thicknesses
-online rp launcher for GTA sun spot cycle duration
-online rp launcher for GTA photosphere composition and elements
-online rp launcher for GTA solar atmosphere and surface gas pressure
-online rp launcher for GTA optical depth and effective temperature
-online rp launcher for GTA system requirements and specifications
-online rp launcher for GTA Intel Core CPU and NVIDIA GPU models
-online rapuncher.com domain name availability
-best online rapuncher reviews and ratings

-

    FiveM has many features that make it stand out from other online rp launchers, such as dynamic streaming of custom cars, maps, and weapons, server-side scripting in Lua, C#, and JavaScript, and improved synchronization for servers with many players.

    -
    

FiveM is compatible with Windows 7 or higher and the latest version of GTA V. FiveM also has a large and active community of players, servers, developers, and streamers. You can find more information about FiveM on their website or their Discord.

-

RAGE Multiplayer - fun, free and easy

-

Another popular and well-known online rp launcher is RAGE Multiplayer. RAGE Multiplayer is a modification for GTA V that enables you to play multiplayer on customized dedicated servers powered by RAGE Technology Group. RAGE Multiplayer has been around since 2017 and has over 15k players playing right now.

-

    RAGE Multiplayer has many features that make it stand out from other online rp launchers, such as client- and server-side scripting in C# and JavaScript, support for large player counts, and full freedom to build custom game modes for your server.

    -
    

RAGE Multiplayer is compatible with Windows 7 or higher and the latest version of GTA V. RAGE Multiplayer also has a large and active community of players, servers, developers, and streamers. You can find more information about RAGE Multiplayer on their website or their Discord.

-

How to play on GTA RP servers

-

GTA RP servers are one of the most popular types of online rp launchers. GTA RP stands for Grand Theft Auto Roleplay, which is a game mode where you create a character and live a virtual life in the GTA world. You can interact with other players, follow the laws, get a job, join a gang, or do whatever you want.

-

GTA RP servers are usually hosted by online rp launchers such as FiveM or RAGE Multiplayer. To play on GTA RP servers, you need to have GTA V installed on your PC and an online rp launcher of your choice. You also need to find a GTA RP server that suits your preferences and style. Some GTA RP servers may have different rules, themes, whitelists, applications, or requirements than others.

-

To join a GTA RP server, you need to follow these steps:

-
    -
      1. Launch your online rp launcher and select the server browser.
      2. Search for a GTA RP server that you like and click on it.
      3. Read the server's description, rules, website, Discord, or any other information provided by the server owner.
      4. If the server requires an application or a whitelist, follow the instructions given by the server owner to apply or register.
      5. If the server does not require an application or a whitelist, or if you have been accepted or whitelisted, click on connect to join the server.
      6. Create your character and start roleplaying.
    
-

GTA RP servers are fun and immersive ways to enjoy GTA Online with other players. You can make friends, enemies, allies, rivals, lovers, or anything else you can imagine. You can also explore different aspects of the GTA world that you may not have seen before. GTA RP servers are like living in your own GTA movie or TV show.

Tips and tricks for online rp launcher users

-

Online rp launchers are great ways to enhance your GTA Online experience, but they also come with some challenges and risks. Here are some tips and tricks for online rp launcher users to make the most out of their online rp launcher adventures:

-

    Back up your game files
    

-

    Before installing or using any online rp launcher, it is always a good idea to back up your game files. This way, you can restore your original GTA V installation in case something goes wrong or you want to play GTA Online again. You can back up your game files by copying the GTA V folder to another location on your PC or by using backup software.
    

-

Follow the server rules

-

When playing on any online rp launcher server, you should always follow the server rules and respect the other players. This is especially important for GTA RP servers, where you are expected to roleplay realistically and follow the server's theme and lore. Breaking the server rules or disrupting the roleplay can result in a kick, a ban, or a report from the server owner or the admins.

-

Update your online rp launcher regularly

-

Online rp launchers are constantly being updated and improved by their developers and communities. To enjoy the latest features, fixes, and enhancements, you should always update your online rp launcher regularly. You can check for updates on the online rp launcher's website, Discord, or launcher. You should also update your GTA V game whenever a new patch or update is released by Rockstar.

-

Use a VPN

-

Using a VPN (virtual private network) can help you protect your privacy and security when playing on online rp launcher servers. A VPN can hide your IP address and encrypt your data, making it harder for hackers, trackers, or malicious players to access your information or harm your PC. A VPN can also help you bypass geo-restrictions or firewalls that may prevent you from accessing certain online rp launcher servers.

-

Have fun

-

The most important tip for online rp launcher users is to have fun. Online rp launchers are meant to provide you with endless possibilities and opportunities to enjoy GTA Online in new and creative ways. You can explore different worlds, meet new people, create your own stories, or just have a blast. Online rp launchers are all about having fun.

-

Conclusion

-

    To recap: online rp launchers are multiplayer modifications that let you play GTA Online on customized dedicated servers with their own game modes, maps, vehicles, and rules. FiveM and RAGE Multiplayer are the most popular and well-supported options, and the GTA RP servers built on them let you create a character and live a virtual life in the GTA world.

    -

    To make the most of the experience, back up your game files, follow each server's rules, keep your launcher updated, consider using a VPN, and above all have fun.
    

-

If you are looking for a new way to enjoy GTA Online with more freedom, creativity, and fun, you should definitely try online rp launchers. They will change the way you play GTA Online forever.

-

Frequently Asked Questions

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/A00001/bingothoo/src/lib/utils.ts b/spaces/A00001/bingothoo/src/lib/utils.ts deleted file mode 100644 index 0a09ddc4aa5518f681a00a64ad48566516f35417..0000000000000000000000000000000000000000 --- a/spaces/A00001/bingothoo/src/lib/utils.ts +++ /dev/null @@ -1,158 +0,0 @@ -import { clsx, type ClassValue } from 'clsx' -import { customAlphabet } from 'nanoid' -import { twMerge } from 'tailwind-merge' - -export function cn(...inputs: ClassValue[]) { - return twMerge(clsx(inputs)) -} - -export const nanoid = customAlphabet( - '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', - 7 -) // 7-character random string - -export function createChunkDecoder() { - const decoder = new TextDecoder() - return function (chunk: Uint8Array | undefined): string { - if (!chunk) return '' - return decoder.decode(chunk, { stream: true }) - } -} - -export function random (start: number, end: number) { - return start + Math.ceil(Math.random() * (end - start)) -} - -export function randomIP() { - return `11.${random(104, 107)}.${random(1, 255)}.${random(1, 255)}` -} - -export const defaultUID = Math.random().toString(36).slice(2) - -export function parseHeadersFromCurl(content: string) { - const re = /-H '([^:]+):\s*([^']+)/mg - const headers: HeadersInit = {} - content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // 将 cmd curl 转成 bash curl - content.replace(re, (_: string, key: string, value: string) => { - headers[key] = value - return '' - }) - - return headers -} - -export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2'] -export function encodeHeadersToCookie(content: string) { - const base64Content = btoa(content) - const contentChunks = base64Content.match(/.{1,4000}/g) || [] - return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`) -} - -export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) { - let base64Content = '' - ChunkKeys.forEach((key) => { - base64Content += (cookies[key] || '') - }) - try { - return atob(base64Content) - } catch(e) { - return '' - } -} - -export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) { - return parseHeadersFromCurl(extraCurlFromCookie(cookies)) -} - -export function formatDate(input: string | number | Date): string { - const date = new Date(input) - return date.toLocaleDateString('en-US', { - month: 'long', - day: 'numeric', - year: 'numeric' - }) -} - -export function parseCookie(cookie: string, cookieName: string) { - const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie - return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : '' -} - -export function setCookie(key: string, value: string) { - const maxAge = 86400 * 30 - document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure` -} - -export function getCookie(cookieName: string) { - const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`) - return re.test(document.cookie) ? 
RegExp.$1 : '' -} - -export function parseCookies(cookie: string, cookieNames: string[]) { - const cookies: { [key: string]: string } = {} - cookieNames.forEach(cookieName => { - cookies[cookieName] = parseCookie(cookie, cookieName) - }) - return cookies -} - -export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0' -export const DEFAULT_IP = process.env.BING_IP || randomIP() - -export function parseUA(ua?: string, default_ua = DEFAULT_UA) { - return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua -} - -export function createHeaders(cookies: Partial<{ [key: string]: string }>, defaultHeaders?: Partial<{ [key: string]: string }>, type?: string) { - let { - BING_COOKIE = process.env.BING_COOKIE, - BING_UA = process.env.BING_UA, - BING_IP = process.env.BING_IP, - BING_HEADER = process.env.BING_HEADER, - IMAGE_ONLY = process.env.IMAGE_ONLY ?? '1', - } = cookies - - if (BING_HEADER) { - const headers = extraHeadersFromCookie({ - BING_HEADER, - ...cookies, - }) || {} - if (/^(1|true|yes)$/.test(String(IMAGE_ONLY)) && type !== 'image') { - // 仅画图时设置 cookie - headers.cookie = `_U=${defaultUID}` - } - if (headers['user-agent']) { - return headers - } - } - - const ua = parseUA(BING_UA) - - if (!BING_COOKIE) { - BING_COOKIE = defaultHeaders?.IMAGE_BING_COOKIE || defaultUID // hf 暂时不用 Cookie 也可以正常使用 - } - - const parsedCookie = parseCookie(BING_COOKIE, '_U') - if (!parsedCookie) { - throw new Error('Invalid Cookie') - } - return { - 'x-forwarded-for': BING_IP || DEFAULT_IP, - 'Accept-Encoding': 'gzip, deflate, br', - 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6', - 'User-Agent': ua!, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: `_U=${parsedCookie}` || '', - } -} - -export class WatchDog { - private tid = 0 - watch(fn: Function, timeout = 2000) { - clearTimeout(this.tid) - this.tid = setTimeout(fn, timeout + Math.random() * 1000) - } - reset() { - clearTimeout(this.tid) - } -} diff --git a/spaces/AIConsultant/MusicGen/audiocraft/utils/deadlock.py b/spaces/AIConsultant/MusicGen/audiocraft/utils/deadlock.py deleted file mode 100644 index 8abd1bbeea5909e664cf816c020bd7c37effdb66..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/utils/deadlock.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -from queue import Queue, Empty -import signal -import sys -import threading -import traceback - -logger = logging.getLogger(__name__) - - -class DeadlockDetect: - def __init__(self, use: bool = False, timeout: float = 120.): - self.use = use - self.timeout = timeout - self._queue: Queue = Queue() - - def update(self, stage: str): - if self.use: - self._queue.put(stage) - - def __enter__(self): - if self.use: - self._thread = threading.Thread(target=self._detector_thread) - self._thread.start() - - def __exit__(self, exc_type, exc_val, exc_tb): - if self.use: - self._queue.put(None) - self._thread.join() - - def _detector_thread(self): - logger.debug("Deadlock detector started") - last_stage = "init" - while True: - try: - stage = self._queue.get(timeout=self.timeout) - except Empty: - break - if stage is None: - logger.debug("Exiting deadlock detector thread") - return - else: - last_stage = stage - logger.error("Deadlock detector timed out, last stage was %s", last_stage) - for th in threading.enumerate(): - print(th, file=sys.stderr) - traceback.print_stack(sys._current_frames()[th.ident]) - print(file=sys.stderr) - sys.stdout.flush() - sys.stderr.flush() - os.kill(os.getpid(), signal.SIGKILL) diff --git a/spaces/AIZero2Hero4Health/5-ImageToLineDrawing-GR/app.py b/spaces/AIZero2Hero4Health/5-ImageToLineDrawing-GR/app.py deleted file mode 100644 index 5680950b2b2e4a9d5659e952867fca474eb890c3..0000000000000000000000000000000000000000 --- a/spaces/AIZero2Hero4Health/5-ImageToLineDrawing-GR/app.py +++ /dev/null @@ -1,126 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import gradio as gr -from PIL import Image -import torchvision.transforms as transforms - -norm_layer = nn.InstanceNorm2d - -class ResidualBlock(nn.Module): - def __init__(self, in_features): - super(ResidualBlock, self).__init__() - - conv_block = [ nn.ReflectionPad2d(1), - nn.Conv2d(in_features, in_features, 3), - norm_layer(in_features), - nn.ReLU(inplace=True), - nn.ReflectionPad2d(1), - nn.Conv2d(in_features, in_features, 3), - norm_layer(in_features) - ] - - self.conv_block = nn.Sequential(*conv_block) - - def forward(self, x): - return x + self.conv_block(x) - - -class Generator(nn.Module): - def __init__(self, input_nc, output_nc, n_residual_blocks=9, sigmoid=True): - super(Generator, self).__init__() - - # Initial convolution block - model0 = [ nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, 64, 7), - norm_layer(64), - nn.ReLU(inplace=True) ] - self.model0 = nn.Sequential(*model0) - - # Downsampling - model1 = [] - in_features = 64 - out_features = in_features*2 - for _ in range(2): - model1 += [ nn.Conv2d(in_features, out_features, 3, stride=2, padding=1), - norm_layer(out_features), - nn.ReLU(inplace=True) ] - in_features = out_features - out_features = in_features*2 - self.model1 = nn.Sequential(*model1) - - model2 = [] - # Residual blocks - for _ in range(n_residual_blocks): - model2 += [ResidualBlock(in_features)] - self.model2 = nn.Sequential(*model2) - - # Upsampling - model3 = [] - out_features = in_features//2 - for _ in range(2): - model3 += [ nn.ConvTranspose2d(in_features, out_features, 3, stride=2, padding=1, output_padding=1), - norm_layer(out_features), - nn.ReLU(inplace=True) ] - in_features = out_features - out_features = in_features//2 - self.model3 = nn.Sequential(*model3) - - # Output layer - model4 = [ nn.ReflectionPad2d(3), - nn.Conv2d(64, output_nc, 7)] - if sigmoid: - model4 += [nn.Sigmoid()] - - self.model4 = nn.Sequential(*model4) - - def 
forward(self, x, cond=None): - out = self.model0(x) - out = self.model1(out) - out = self.model2(out) - out = self.model3(out) - out = self.model4(out) - - return out - -model1 = Generator(3, 1, 3) -model1.load_state_dict(torch.load('model.pth', map_location=torch.device('cpu'))) -model1.eval() - -model2 = Generator(3, 1, 3) -model2.load_state_dict(torch.load('model2.pth', map_location=torch.device('cpu'))) -model2.eval() - -def predict(input_img, ver): - input_img = Image.open(input_img) - transform = transforms.Compose([transforms.Resize(256, Image.BICUBIC), transforms.ToTensor()]) - input_img = transform(input_img) - input_img = torch.unsqueeze(input_img, 0) - - drawing = 0 - with torch.no_grad(): - if ver == 'Simple Lines': - drawing = model2(input_img)[0].detach() - else: - drawing = model1(input_img)[0].detach() - - drawing = transforms.ToPILImage()(drawing) - return drawing - -title="Image to Line Drawings - Complex and Simple Portraits and Landscapes" -examples=[ -['01.jpeg', 'Simple Lines'], ['02.jpeg', 'Simple Lines'], ['03.jpeg', 'Simple Lines'], -['07.jpeg', 'Complex Lines'], ['08.jpeg', 'Complex Lines'], ['09.jpeg', 'Complex Lines'], -['10.jpeg', 'Simple Lines'], ['11.jpeg', 'Simple Lines'], ['12.jpeg', 'Simple Lines'], -['01.jpeg', 'Complex Lines'], ['02.jpeg', 'Complex Lines'], ['03.jpeg', 'Complex Lines'], -['04.jpeg', 'Simple Lines'], ['05.jpeg', 'Simple Lines'], ['06.jpeg', 'Simple Lines'], -['07.jpeg', 'Simple Lines'], ['08.jpeg', 'Simple Lines'], ['09.jpeg', 'Simple Lines'], -['04.jpeg', 'Complex Lines'], ['05.jpeg', 'Complex Lines'], ['06.jpeg', 'Complex Lines'], -['10.jpeg', 'Complex Lines'], ['11.jpeg', 'Complex Lines'], ['12.jpeg', 'Complex Lines'] -] - -iface = gr.Interface(predict, [gr.inputs.Image(type='filepath'), - gr.inputs.Radio(['Complex Lines','Simple Lines'], type="value", default='Simple Lines', label='version')], - gr.outputs.Image(type="pil"), title=title,examples=examples) - -iface.launch() \ No newline at end of file diff --git a/spaces/Abubakari/Sales_Prediction/README.md b/spaces/Abubakari/Sales_Prediction/README.md deleted file mode 100644 index 6e43c21a356f1076322b51b0ff4b9761facde5db..0000000000000000000000000000000000000000 --- a/spaces/Abubakari/Sales_Prediction/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sales Prediction -emoji: 💻 -colorFrom: blue -colorTo: red -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatgptX.py b/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatgptX.py deleted file mode 100644 index 2944fb264ae78dd3502e20e28233da21799e467e..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT/g4f/Provider/ChatgptX.py +++ /dev/null @@ -1,97 +0,0 @@ -from __future__ import annotations - -import re -import json - -from aiohttp import ClientSession -from ..typing import AsyncResult, Messages -from .base_provider import AsyncGeneratorProvider -from .helper import format_prompt - - -class ChatgptX(AsyncGeneratorProvider): - url = "https://chatgptx.de" - supports_gpt_35_turbo = True - working = True - - @classmethod - async def create_async_generator( - cls, - model: str, - messages: Messages, - **kwargs - ) -> AsyncResult: - headers = { - 'accept-language': 'de-DE,de;q=0.9,en-DE;q=0.8,en;q=0.7,en-US', - 'sec-ch-ua': '"Google Chrome";v="117", "Not;A=Brand";v="8", "Chromium";v="117"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-platform': 
'Linux', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/117.0.0.0 Safari/537.36', - } - async with ClientSession(headers=headers) as session: - async with session.get(f"{cls.url}/") as response: - response = await response.text() - result = re.search(r'DDIM -class DDIMSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar -def betas_for_alpha_bar( - num_diffusion_timesteps, - max_beta=0.999, - alpha_transform_type="cosine", -): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar. - Choose from `cosine` or `exp` - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - if alpha_transform_type == "cosine": - - def alpha_bar_fn(t): - return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2 - - elif alpha_transform_type == "exp": - - def alpha_bar_fn(t): - return math.exp(t * -12.0) - - else: - raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}") - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta)) - return torch.tensor(betas, dtype=torch.float32) - - -# Copied from diffusers.schedulers.scheduling_ddim.rescale_zero_terminal_snr -def rescale_zero_terminal_snr(betas): - """ - Rescales betas to have zero terminal SNR Based on https://arxiv.org/pdf/2305.08891.pdf (Algorithm 1) - - - Args: - betas (`torch.FloatTensor`): - the betas that the scheduler is being initialized with. - - Returns: - `torch.FloatTensor`: rescaled betas with zero terminal SNR - """ - # Convert betas to alphas_bar_sqrt - alphas = 1.0 - betas - alphas_cumprod = torch.cumprod(alphas, dim=0) - alphas_bar_sqrt = alphas_cumprod.sqrt() - - # Store old values. - alphas_bar_sqrt_0 = alphas_bar_sqrt[0].clone() - alphas_bar_sqrt_T = alphas_bar_sqrt[-1].clone() - - # Shift so the last timestep is zero. - alphas_bar_sqrt -= alphas_bar_sqrt_T - - # Scale so the first timestep is back to the old value. 
- alphas_bar_sqrt *= alphas_bar_sqrt_0 / (alphas_bar_sqrt_0 - alphas_bar_sqrt_T) - - # Convert alphas_bar_sqrt to betas - alphas_bar = alphas_bar_sqrt**2 # Revert sqrt - alphas = alphas_bar[1:] / alphas_bar[:-1] # Revert cumprod - alphas = torch.cat([alphas_bar[0:1], alphas]) - betas = 1 - alphas - - return betas - - -class DDIMInverseScheduler(SchedulerMixin, ConfigMixin): - """ - DDIMInverseScheduler is the reverse scheduler of [`DDIMScheduler`]. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2010.02502 - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - clip_sample (`bool`, default `True`): - option to clip predicted sample for numerical stability. - clip_sample_range (`float`, default `1.0`): - the maximum magnitude for sample clipping. Valid only when `clip_sample=True`. - set_alpha_to_zero (`bool`, default `True`): - each diffusion step uses the value of alphas product at that step and at the previous one. For the final - step there is no previous alpha. When this option is `True` the previous alpha product is fixed to `0`, - otherwise it uses the value of alpha at step `num_train_timesteps - 1`. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_zero=False`, to make the last step use step `num_train_timesteps - 1` for the previous alpha - product. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - timestep_spacing (`str`, default `"leading"`): - The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample - Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information. - rescale_betas_zero_snr (`bool`, default `False`): - whether to rescale the betas to have zero terminal SNR (proposed by https://arxiv.org/pdf/2305.08891.pdf). - This can enable the model to generate very bright and dark samples instead of limiting it to samples with - medium brightness. Loosely related to - [`--offset_noise`](https://github.com/huggingface/diffusers/blob/74fd735eb073eb1d774b1ab4154a0876eb82f055/examples/dreambooth/train_dreambooth.py#L506). 
- """ - - order = 1 - ignore_for_config = ["kwargs"] - _deprecated_kwargs = ["set_alpha_to_zero"] - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - clip_sample: bool = True, - set_alpha_to_one: bool = True, - steps_offset: int = 0, - prediction_type: str = "epsilon", - clip_sample_range: float = 1.0, - timestep_spacing: str = "leading", - rescale_betas_zero_snr: bool = False, - **kwargs, - ): - if kwargs.get("set_alpha_to_zero", None) is not None: - deprecation_message = ( - "The `set_alpha_to_zero` argument is deprecated. Please use `set_alpha_to_one` instead." - ) - deprecate("set_alpha_to_zero", "1.0.0", deprecation_message, standard_warn=False) - set_alpha_to_one = kwargs["set_alpha_to_zero"] - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - # Rescale for zero SNR - if rescale_betas_zero_snr: - self.betas = rescale_zero_terminal_snr(self.betas) - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # At every step in inverted ddim, we are looking into the next alphas_cumprod - # For the initial step, there is no current alphas_cumprod, and the index is out of bounds - # `set_alpha_to_one` decides whether we set this parameter simply to one - # in this case, self.step() just output the predicted noise - # or whether we use the initial alpha used in training the diffusion model. - self.initial_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0] - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # setable values - self.num_inference_steps = None - self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps).copy().astype(np.int64)) - - # Copied from diffusers.schedulers.scheduling_ddim.DDIMScheduler.scale_model_input - def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. - - Args: - sample (`torch.FloatTensor`): input sample - timestep (`int`, optional): current timestep - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. 
- """ - - if num_inference_steps > self.config.num_train_timesteps: - raise ValueError( - f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:" - f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle" - f" maximal {self.config.num_train_timesteps} timesteps." - ) - - self.num_inference_steps = num_inference_steps - - # "leading" and "trailing" corresponds to annotation of Table 1. of https://arxiv.org/abs/2305.08891 - if self.config.timestep_spacing == "leading": - step_ratio = self.config.num_train_timesteps // self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round().copy().astype(np.int64) - timesteps += self.config.steps_offset - elif self.config.timestep_spacing == "trailing": - step_ratio = self.config.num_train_timesteps / self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = np.round(np.arange(self.config.num_train_timesteps, 0, -step_ratio)[::-1]).astype(np.int64) - timesteps -= 1 - else: - raise ValueError( - f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'leading' or 'trailing'." - ) - - # Roll timesteps array by one to reflect reversed origin and destination semantics for each step - timesteps = np.roll(timesteps, 1) - timesteps[0] = int(timesteps[1] - step_ratio) - self.timesteps = torch.from_numpy(timesteps).to(device) - - def step( - self, - model_output: torch.FloatTensor, - timestep: int, - sample: torch.FloatTensor, - eta: float = 0.0, - use_clipped_model_output: bool = False, - variance_noise: Optional[torch.FloatTensor] = None, - return_dict: bool = True, - ) -> Union[DDIMSchedulerOutput, Tuple]: - # 1. get previous step value (=t+1) - prev_timestep = timestep + self.config.num_train_timesteps // self.num_inference_steps - - # 2. compute alphas, betas - # change original implementation to exactly match noise levels for analogous forward process - alpha_prod_t = self.alphas_cumprod[timestep] if timestep >= 0 else self.initial_alpha_cumprod - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] - - beta_prod_t = 1 - alpha_prod_t - - # 3. compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - if self.config.prediction_type == "epsilon": - pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5) - pred_epsilon = model_output - elif self.config.prediction_type == "sample": - pred_original_sample = model_output - pred_epsilon = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5) - elif self.config.prediction_type == "v_prediction": - pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output - pred_epsilon = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction`" - ) - - # 4. Clip or threshold "predicted x_0" - if self.config.clip_sample: - pred_original_sample = pred_original_sample.clamp( - -self.config.clip_sample_range, self.config.clip_sample_range - ) - - # 5. 
compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_sample_direction = (1 - alpha_prod_t_prev) ** (0.5) * pred_epsilon - - # 6. compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction - - if not return_dict: - return (prev_sample, pred_original_sample) - return DDIMSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample) - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/README.md b/spaces/Andy1621/uniformer_image_detection/configs/instaboost/README.md deleted file mode 100644 index 5ab74a1af13639fef753dbfd43f064400cba9129..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/instaboost/README.md +++ /dev/null @@ -1,44 +0,0 @@ -# InstaBoost for MMDetection - -[ALGORITHM] - -Configs in this directory is the implementation for ICCV2019 paper "InstaBoost: Boosting Instance Segmentation Via Probability Map Guided Copy-Pasting" and provided by the authors of the paper. InstaBoost is a data augmentation method for object detection and instance segmentation. The paper has been released on [`arXiv`](https://arxiv.org/abs/1908.07801). - -```latex -@inproceedings{fang2019instaboost, - title={Instaboost: Boosting instance segmentation via probability map guided copy-pasting}, - author={Fang, Hao-Shu and Sun, Jianhua and Wang, Runzhong and Gou, Minghao and Li, Yong-Lu and Lu, Cewu}, - booktitle={Proceedings of the IEEE International Conference on Computer Vision}, - pages={682--691}, - year={2019} -} -``` - -## Usage - -### Requirements - -You need to install `instaboostfast` before using it. - -```shell -pip install instaboostfast -``` - -The code and more details can be found [here](https://github.com/GothicAi/Instaboost). - -### Integration with MMDetection - -InstaBoost have been already integrated in the data pipeline, thus all you need is to add or change **InstaBoost** configurations after **LoadImageFromFile**. We have provided examples like [this](mask_rcnn_r50_fpn_instaboost_4x#L121). You can refer to [`InstaBoostConfig`](https://github.com/GothicAi/InstaBoost-pypi#instaboostconfig) for more details. - -## Results and Models - -- All models were trained on `coco_2017_train` and tested on `coco_2017_val` for conveinience of evaluation and comparison. In the paper, the results are obtained from `test-dev`. -- To balance accuracy and training time when using InstaBoost, models released in this page are all trained for 48 Epochs. Other training and testing configs strictly follow the original framework. -- For results and models in MMDetection V1.x, please refer to [Instaboost](https://github.com/GothicAi/Instaboost). 
- -| Network | Backbone | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download | -| :-------------: | :--------: | :-----: | :------: | :------------: | :------: | :-----: | :------: | :-----------------: | -| Mask R-CNN | R-50-FPN | 4x | 4.4 | 17.5 | 40.6 | 36.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco/mask_rcnn_r50_fpn_instaboost_4x_coco_20200307-d025f83a.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco/mask_rcnn_r50_fpn_instaboost_4x_coco_20200307_223635.log.json) | -| Mask R-CNN | R-101-FPN | 4x | 6.4 | | 42.5 | 38.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco/mask_rcnn_r101_fpn_instaboost_4x_coco_20200703_235738-f23f3a5f.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco/mask_rcnn_r101_fpn_instaboost_4x_coco_20200703_235738.log.json) | -| Mask R-CNN | X-101-64x4d-FPN | 4x | 10.7 | | 44.7 | 39.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco_20200515_080947-8ed58c1b.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco_20200515_080947.log.json) | -| Cascade R-CNN | R-50-FPN | 4x | 6.0 | 12.0 | 43.7 | 38.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco_20200307-c19d98d9.pth) \| [log](http://download.openmmlab.com/mmdetection/v2.0/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco_20200307_223646.log.json) | diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_160k_ade20k.py deleted file mode 100644 index 9e43af541f6e3df3f36479e736bb0c03fc916970..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './ann_r50-d8_512x512_160k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/update_windows.bat b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/update_windows.bat deleted file mode 100644 index 0d8f815272c5eec8714ef1adc1a23d547d6bf62d..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/update_windows.bat +++ /dev/null @@ -1,37 +0,0 @@ -@echo off - -cd /D "%~dp0" - -set PATH=%PATH%;%SystemRoot%\system32 - -echo "%CD%"| findstr /C:" " >nul && echo This script relies on Miniconda which cannot be silently installed under a path with spaces. 
&& goto end - -@rem fix failed install when installing to a separate drive -set TMP=%cd%\installer_files -set TEMP=%cd%\installer_files - -@rem deactivate existing conda envs as needed to avoid conflicts -(call conda deactivate && call conda deactivate && call conda deactivate) 2>nul - -@rem config -set CONDA_ROOT_PREFIX=%cd%\installer_files\conda -set INSTALL_ENV_DIR=%cd%\installer_files\env - -@rem environment isolation -set PYTHONNOUSERSITE=1 -set PYTHONPATH= -set PYTHONHOME= -set "CUDA_PATH=%INSTALL_ENV_DIR%" -set "CUDA_HOME=%CUDA_PATH%" - -@rem activate installer env -call "%CONDA_ROOT_PREFIX%\condabin\conda.bat" activate "%INSTALL_ENV_DIR%" || ( echo. && echo Miniconda hook not found. && goto end ) - -@rem update installer env -call python one_click.py --update && ( - echo. - echo Done! -) - -:end -pause diff --git a/spaces/Apex-X/ROOPOK/roop/core.py b/spaces/Apex-X/ROOPOK/roop/core.py deleted file mode 100644 index ecde46e9747ca7bcfb7aca9499977b7b2aae88fd..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/ROOPOK/roop/core.py +++ /dev/null @@ -1,215 +0,0 @@ -import os -import sys -# single thread doubles cuda performance - needs to be set before torch import -if any(arg.startswith('--execution-provider') for arg in sys.argv): - os.environ['OMP_NUM_THREADS'] = '1' -# reduce tensorflow log level -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' -import warnings -from typing import List -import platform -import signal -import shutil -import argparse -import torch -import onnxruntime -import tensorflow - -import roop.globals -import roop.metadata -import roop.ui as ui -from roop.predicter import predict_image, predict_video -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import has_image_extension, is_image, is_video, detect_fps, create_video, extract_frames, get_temp_frame_paths, restore_audio, create_temp, move_temp, clean_temp, normalize_output_path - -if 'ROCMExecutionProvider' in roop.globals.execution_providers: - del torch - -warnings.filterwarnings('ignore', category=FutureWarning, module='insightface') -warnings.filterwarnings('ignore', category=UserWarning, module='torchvision') - - -def parse_args() -> None: - signal.signal(signal.SIGINT, lambda signal_number, frame: destroy()) - program = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=100)) - program.add_argument('-s', '--source', help='select an source image', dest='source_path') - program.add_argument('-t', '--target', help='select an target image or video', dest='target_path') - program.add_argument('-o', '--output', help='select output file or directory', dest='output_path') - program.add_argument('--frame-processor', help='frame processors (choices: face_swapper, face_enhancer, ...)', dest='frame_processor', default=['face_swapper'], nargs='+') - program.add_argument('--keep-fps', help='keep original fps', dest='keep_fps', action='store_true', default=False) - program.add_argument('--keep-audio', help='keep original audio', dest='keep_audio', action='store_true', default=True) - program.add_argument('--keep-frames', help='keep temporary frames', dest='keep_frames', action='store_true', default=False) - program.add_argument('--many-faces', help='process every face', dest='many_faces', action='store_true', default=False) - program.add_argument('--video-encoder', help='adjust output video encoder', dest='video_encoder', default='libx264', choices=['libx264', 'libx265', 'libvpx-vp9']) - program.add_argument('--video-quality', 
help='adjust output video quality', dest='video_quality', type=int, default=18, choices=range(52), metavar='[0-51]') - program.add_argument('--max-memory', help='maximum amount of RAM in GB', dest='max_memory', type=int, default=suggest_max_memory()) - program.add_argument('--execution-provider', help='available execution provider (choices: cpu, ...)', dest='execution_provider', default=['cpu'], choices=suggest_execution_providers(), nargs='+') - program.add_argument('--execution-threads', help='number of execution threads', dest='execution_threads', type=int, default=suggest_execution_threads()) - program.add_argument('-v', '--version', action='version', version=f'{roop.metadata.name} {roop.metadata.version}') - - args = program.parse_args() - - roop.globals.source_path = args.source_path - roop.globals.target_path = args.target_path - roop.globals.output_path = normalize_output_path(roop.globals.source_path, roop.globals.target_path, args.output_path) - roop.globals.frame_processors = args.frame_processor - roop.globals.headless = args.source_path or args.target_path or args.output_path - roop.globals.keep_fps = args.keep_fps - roop.globals.keep_audio = args.keep_audio - roop.globals.keep_frames = args.keep_frames - roop.globals.many_faces = args.many_faces - roop.globals.video_encoder = args.video_encoder - roop.globals.video_quality = args.video_quality - roop.globals.max_memory = args.max_memory - roop.globals.execution_providers = decode_execution_providers(args.execution_provider) - roop.globals.execution_threads = args.execution_threads - - -def encode_execution_providers(execution_providers: List[str]) -> List[str]: - return [execution_provider.replace('ExecutionProvider', '').lower() for execution_provider in execution_providers] - - -def decode_execution_providers(execution_providers: List[str]) -> List[str]: - return [provider for provider, encoded_execution_provider in zip(onnxruntime.get_available_providers(), encode_execution_providers(onnxruntime.get_available_providers())) - if any(execution_provider in encoded_execution_provider for execution_provider in execution_providers)] - - -def suggest_max_memory() -> int: - if platform.system().lower() == 'darwin': - return 4 - return 16 - - -def suggest_execution_providers() -> List[str]: - return encode_execution_providers(onnxruntime.get_available_providers()) - - -def suggest_execution_threads() -> int: - if 'DmlExecutionProvider' in roop.globals.execution_providers: - return 1 - if 'ROCMExecutionProvider' in roop.globals.execution_providers: - return 1 - return 8 - - -def limit_resources() -> None: - # prevent tensorflow memory leak - gpus = tensorflow.config.experimental.list_physical_devices('GPU') - for gpu in gpus: - tensorflow.config.experimental.set_virtual_device_configuration(gpu, [ - tensorflow.config.experimental.VirtualDeviceConfiguration(memory_limit=1024) - ]) - # limit memory usage - if roop.globals.max_memory: - memory = roop.globals.max_memory * 1024 ** 3 - if platform.system().lower() == 'darwin': - memory = roop.globals.max_memory * 1024 ** 6 - if platform.system().lower() == 'windows': - import ctypes - kernel32 = ctypes.windll.kernel32 - kernel32.SetProcessWorkingSetSize(-1, ctypes.c_size_t(memory), ctypes.c_size_t(memory)) - else: - import resource - resource.setrlimit(resource.RLIMIT_DATA, (memory, memory)) - - -def release_resources() -> None: - if 'CUDAExecutionProvider' in roop.globals.execution_providers: - torch.cuda.empty_cache() - - -def pre_check() -> bool: - if sys.version_info < (3, 9): - 
update_status('Python version is not supported - please upgrade to 3.9 or higher.') - return False - if not shutil.which('ffmpeg'): - update_status('ffmpeg is not installed.') - return False - return True - - -def update_status(message: str, scope: str = 'ROOP.CORE') -> None: - print(f'[{scope}] {message}') - if not roop.globals.headless: - ui.update_status(message) - - -def start() -> None: - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_start(): - return - # process image to image - if has_image_extension(roop.globals.target_path): - if predict_image(roop.globals.target_path): - destroy() - shutil.copy2(roop.globals.target_path, roop.globals.output_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Processing...', frame_processor.NAME) - frame_processor.process_image(roop.globals.source_path, roop.globals.output_path, roop.globals.output_path) - frame_processor.post_process() - release_resources() - if is_image(roop.globals.target_path): - update_status('Processing to image succeeded!') - else: - update_status('Processing to image failed!') - return - # process image to video - if predict_video(roop.globals.target_path): - destroy() - update_status('Creating temp resources...') - create_temp(roop.globals.target_path) - update_status('Extracting frames...') - extract_frames(roop.globals.target_path) - temp_frame_paths = get_temp_frame_paths(roop.globals.target_path) - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - update_status('Processing...', frame_processor.NAME) - frame_processor.process_video(roop.globals.source_path, temp_frame_paths) - frame_processor.post_process() - release_resources() - # handle fps - if roop.globals.keep_fps: - update_status('Detecting fps...') - fps = detect_fps(roop.globals.target_path) - update_status(f'Creating video with {fps} fps...') - create_video(roop.globals.target_path, fps) - else: - update_status('Creating video with 30.0 fps...') - create_video(roop.globals.target_path) - # handle audio - if roop.globals.keep_audio: - if roop.globals.keep_fps: - update_status('Restoring audio...') - else: - update_status('Restoring audio might cause issues as fps are not kept...') - restore_audio(roop.globals.target_path, roop.globals.output_path) - else: - move_temp(roop.globals.target_path, roop.globals.output_path) - # clean and validate - clean_temp(roop.globals.target_path) - if is_video(roop.globals.target_path): - update_status('Processing to video succeeded!') - else: - update_status('Processing to video failed!') - - -def destroy() -> None: - if roop.globals.target_path: - clean_temp(roop.globals.target_path) - quit() - - -def run() -> None: - parse_args() - if not pre_check(): - return - for frame_processor in get_frame_processors_modules(roop.globals.frame_processors): - if not frame_processor.pre_check(): - return - limit_resources() - if roop.globals.headless: - start() - else: - window = ui.init(start, destroy) - window.mainloop() - - \ No newline at end of file diff --git a/spaces/Aristo/trafficsign/README.md b/spaces/Aristo/trafficsign/README.md deleted file mode 100644 index a6e364cf875766a02d8083ff51ce45b846106c80..0000000000000000000000000000000000000000 --- a/spaces/Aristo/trafficsign/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Trafficsign -emoji: 🏃 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: afl-3.0 ---- - 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/dir_util.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/dir_util.py deleted file mode 100644 index 6f0bb8ad76a064dad843db670c91e493d0e19a0c..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/dir_util.py +++ /dev/null @@ -1,243 +0,0 @@ -"""distutils.dir_util - -Utility functions for manipulating directories and directory trees.""" - -import os -import errno -from distutils.errors import DistutilsInternalError, DistutilsFileError -from distutils import log - -# cache for by mkpath() -- in addition to cheapening redundant calls, -# eliminates redundant "creating /foo/bar/baz" messages in dry-run mode -_path_created = {} - - -def mkpath(name, mode=0o777, verbose=1, dry_run=0): # noqa: C901 - """Create a directory and any missing ancestor directories. - - If the directory already exists (or if 'name' is the empty string, which - means the current directory, which of course exists), then do nothing. - Raise DistutilsFileError if unable to create some directory along the way - (eg. some sub-path exists, but is a file rather than a directory). - If 'verbose' is true, print a one-line summary of each mkdir to stdout. - Return the list of directories actually created. - - os.makedirs is not used because: - - a) It's new to Python 1.5.2, and - b) it blows up if the directory already exists (in which case it should - silently succeed). - """ - - global _path_created - - # Detect a common bug -- name is None - if not isinstance(name, str): - raise DistutilsInternalError( - "mkpath: 'name' must be a string (got {!r})".format(name) - ) - - # XXX what's the better way to handle verbosity? print as we create - # each directory in the path (the current behaviour), or only announce - # the creation of the whole path? (quite easy to do the latter since - # we're not using a recursive algorithm) - - name = os.path.normpath(name) - created_dirs = [] - if os.path.isdir(name) or name == '': - return created_dirs - if _path_created.get(os.path.abspath(name)): - return created_dirs - - (head, tail) = os.path.split(name) - tails = [tail] # stack of lone dirs to create - - while head and tail and not os.path.isdir(head): - (head, tail) = os.path.split(head) - tails.insert(0, tail) # push next higher dir onto stack - - # now 'head' contains the deepest directory that already exists - # (that is, the child of 'head' in 'name' is the highest directory - # that does *not* exist) - for d in tails: - # print "head = %s, d = %s: " % (head, d), - head = os.path.join(head, d) - abs_head = os.path.abspath(head) - - if _path_created.get(abs_head): - continue - - if verbose >= 1: - log.info("creating %s", head) - - if not dry_run: - try: - os.mkdir(head, mode) - except OSError as exc: - if not (exc.errno == errno.EEXIST and os.path.isdir(head)): - raise DistutilsFileError( - "could not create '{}': {}".format(head, exc.args[-1]) - ) - created_dirs.append(head) - - _path_created[abs_head] = 1 - return created_dirs - - -def create_tree(base_dir, files, mode=0o777, verbose=1, dry_run=0): - """Create all the empty directories under 'base_dir' needed to put 'files' - there. 
- - 'base_dir' is just the name of a directory which doesn't necessarily - exist yet; 'files' is a list of filenames to be interpreted relative to - 'base_dir'. 'base_dir' + the directory portion of every file in 'files' - will be created if it doesn't already exist. 'mode', 'verbose' and - 'dry_run' flags are as for 'mkpath()'. - """ - # First get the list of directories to create - need_dir = set() - for file in files: - need_dir.add(os.path.join(base_dir, os.path.dirname(file))) - - # Now create them - for dir in sorted(need_dir): - mkpath(dir, mode, verbose=verbose, dry_run=dry_run) - - -def copy_tree( # noqa: C901 - src, - dst, - preserve_mode=1, - preserve_times=1, - preserve_symlinks=0, - update=0, - verbose=1, - dry_run=0, -): - """Copy an entire directory tree 'src' to a new location 'dst'. - - Both 'src' and 'dst' must be directory names. If 'src' is not a - directory, raise DistutilsFileError. If 'dst' does not exist, it is - created with 'mkpath()'. The end result of the copy is that every - file in 'src' is copied to 'dst', and directories under 'src' are - recursively copied to 'dst'. Return the list of files that were - copied or might have been copied, using their output name. The - return value is unaffected by 'update' or 'dry_run': it is simply - the list of all files under 'src', with the names changed to be - under 'dst'. - - 'preserve_mode' and 'preserve_times' are the same as for - 'copy_file'; note that they only apply to regular files, not to - directories. If 'preserve_symlinks' is true, symlinks will be - copied as symlinks (on platforms that support them!); otherwise - (the default), the destination of the symlink will be copied. - 'update' and 'verbose' are the same as for 'copy_file'. - """ - from distutils.file_util import copy_file - - if not dry_run and not os.path.isdir(src): - raise DistutilsFileError("cannot copy tree '%s': not a directory" % src) - try: - names = os.listdir(src) - except OSError as e: - if dry_run: - names = [] - else: - raise DistutilsFileError( - "error listing files in '{}': {}".format(src, e.strerror) - ) - - if not dry_run: - mkpath(dst, verbose=verbose) - - outputs = [] - - for n in names: - src_name = os.path.join(src, n) - dst_name = os.path.join(dst, n) - - if n.startswith('.nfs'): - # skip NFS rename files - continue - - if preserve_symlinks and os.path.islink(src_name): - link_dest = os.readlink(src_name) - if verbose >= 1: - log.info("linking %s -> %s", dst_name, link_dest) - if not dry_run: - os.symlink(link_dest, dst_name) - outputs.append(dst_name) - - elif os.path.isdir(src_name): - outputs.extend( - copy_tree( - src_name, - dst_name, - preserve_mode, - preserve_times, - preserve_symlinks, - update, - verbose=verbose, - dry_run=dry_run, - ) - ) - else: - copy_file( - src_name, - dst_name, - preserve_mode, - preserve_times, - update, - verbose=verbose, - dry_run=dry_run, - ) - outputs.append(dst_name) - - return outputs - - -def _build_cmdtuple(path, cmdtuples): - """Helper for remove_tree().""" - for f in os.listdir(path): - real_f = os.path.join(path, f) - if os.path.isdir(real_f) and not os.path.islink(real_f): - _build_cmdtuple(real_f, cmdtuples) - else: - cmdtuples.append((os.remove, real_f)) - cmdtuples.append((os.rmdir, path)) - - -def remove_tree(directory, verbose=1, dry_run=0): - """Recursively remove an entire directory tree. - - Any errors are ignored (apart from being reported to stdout if 'verbose' - is true). 
- """ - global _path_created - - if verbose >= 1: - log.info("removing '%s' (and everything under it)", directory) - if dry_run: - return - cmdtuples = [] - _build_cmdtuple(directory, cmdtuples) - for cmd in cmdtuples: - try: - cmd[0](cmd[1]) - # remove dir from cache if it's already there - abspath = os.path.abspath(cmd[1]) - if abspath in _path_created: - del _path_created[abspath] - except OSError as exc: - log.warn("error removing %s: %s", directory, exc) - - -def ensure_relative(path): - """Take the full path 'path', and make it a relative path. - - This is useful to make 'path' the second argument to os.path.join(). - """ - drive, path = os.path.splitdrive(path) - if path[0:1] == os.sep: - path = drive + path[1:] - return path diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_path.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_path.py deleted file mode 100644 index 3767523b784bb93b5b79890eff359628fcfcaa34..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_path.py +++ /dev/null @@ -1,29 +0,0 @@ -import os -from typing import Union - -_Path = Union[str, os.PathLike] - - -def ensure_directory(path): - """Ensure that the parent directory of `path` exists""" - dirname = os.path.dirname(path) - os.makedirs(dirname, exist_ok=True) - - -def same_path(p1: _Path, p2: _Path) -> bool: - """Differs from os.path.samefile because it does not require paths to exist. - Purely string based (no comparison between i-nodes). - >>> same_path("a/b", "./a/b") - True - >>> same_path("a/b", "a/./b") - True - >>> same_path("a/b", "././a/b") - True - >>> same_path("a/b", "./a/b/c/..") - True - >>> same_path("a/b", "../a/b/c") - False - >>> same_path("a", "a/b") - False - """ - return os.path.normpath(p1) == os.path.normpath(p2) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/typing_extensions.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/typing_extensions.py deleted file mode 100644 index 9f1c7aa31e20a7d0ef2e6877ea325c068d50e406..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/typing_extensions.py +++ /dev/null @@ -1,2296 +0,0 @@ -import abc -import collections -import collections.abc -import operator -import sys -import typing - -# After PEP 560, internal typing API was substantially reworked. -# This is especially important for Protocol class which uses internal APIs -# quite extensively. -PEP_560 = sys.version_info[:3] >= (3, 7, 0) - -if PEP_560: - GenericMeta = type -else: - # 3.6 - from typing import GenericMeta, _type_vars # noqa - -# The two functions below are copies of typing internal helpers. 
-# They are needed by _ProtocolMeta - - -def _no_slots_copy(dct): - dict_copy = dict(dct) - if '__slots__' in dict_copy: - for slot in dict_copy['__slots__']: - dict_copy.pop(slot, None) - return dict_copy - - -def _check_generic(cls, parameters): - if not cls.__parameters__: - raise TypeError(f"{cls} is not a generic class") - alen = len(parameters) - elen = len(cls.__parameters__) - if alen != elen: - raise TypeError(f"Too {'many' if alen > elen else 'few'} arguments for {cls};" - f" actual {alen}, expected {elen}") - - -# Please keep __all__ alphabetized within each category. -__all__ = [ - # Super-special typing primitives. - 'ClassVar', - 'Concatenate', - 'Final', - 'ParamSpec', - 'Self', - 'Type', - - # ABCs (from collections.abc). - 'Awaitable', - 'AsyncIterator', - 'AsyncIterable', - 'Coroutine', - 'AsyncGenerator', - 'AsyncContextManager', - 'ChainMap', - - # Concrete collection types. - 'ContextManager', - 'Counter', - 'Deque', - 'DefaultDict', - 'OrderedDict', - 'TypedDict', - - # Structural checks, a.k.a. protocols. - 'SupportsIndex', - - # One-off things. - 'Annotated', - 'final', - 'IntVar', - 'Literal', - 'NewType', - 'overload', - 'Protocol', - 'runtime', - 'runtime_checkable', - 'Text', - 'TypeAlias', - 'TypeGuard', - 'TYPE_CHECKING', -] - -if PEP_560: - __all__.extend(["get_args", "get_origin", "get_type_hints"]) - -# 3.6.2+ -if hasattr(typing, 'NoReturn'): - NoReturn = typing.NoReturn -# 3.6.0-3.6.1 -else: - class _NoReturn(typing._FinalTypingBase, _root=True): - """Special type indicating functions that never return. - Example:: - - from typing import NoReturn - - def stop() -> NoReturn: - raise Exception('no way') - - This type is invalid in other positions, e.g., ``List[NoReturn]`` - will fail in static type checkers. - """ - __slots__ = () - - def __instancecheck__(self, obj): - raise TypeError("NoReturn cannot be used with isinstance().") - - def __subclasscheck__(self, cls): - raise TypeError("NoReturn cannot be used with issubclass().") - - NoReturn = _NoReturn(_root=True) - -# Some unconstrained type variables. These are used by the container types. -# (These are not for export.) -T = typing.TypeVar('T') # Any type. -KT = typing.TypeVar('KT') # Key type. -VT = typing.TypeVar('VT') # Value type. -T_co = typing.TypeVar('T_co', covariant=True) # Any type covariant containers. -T_contra = typing.TypeVar('T_contra', contravariant=True) # Ditto contravariant. - -ClassVar = typing.ClassVar - -# On older versions of typing there is an internal class named "Final". -# 3.8+ -if hasattr(typing, 'Final') and sys.version_info[:2] >= (3, 7): - Final = typing.Final -# 3.7 -elif sys.version_info[:2] >= (3, 7): - class _FinalForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only single type') - return typing._GenericAlias(self, (item,)) - - Final = _FinalForm('Final', - doc="""A special typing construct to indicate that a name - cannot be re-assigned or overridden in a subclass. 
- For example: - - MAX_SIZE: Final = 9000 - MAX_SIZE += 1 # Error reported by type checker - - class Connection: - TIMEOUT: Final[int] = 10 - class FastConnector(Connection): - TIMEOUT = 1 # Error reported by type checker - - There is no runtime checking of these properties.""") -# 3.6 -else: - class _Final(typing._FinalTypingBase, _root=True): - """A special typing construct to indicate that a name - cannot be re-assigned or overridden in a subclass. - For example: - - MAX_SIZE: Final = 9000 - MAX_SIZE += 1 # Error reported by type checker - - class Connection: - TIMEOUT: Final[int] = 10 - class FastConnector(Connection): - TIMEOUT = 1 # Error reported by type checker - - There is no runtime checking of these properties. - """ - - __slots__ = ('__type__',) - - def __init__(self, tp=None, **kwds): - self.__type__ = tp - - def __getitem__(self, item): - cls = type(self) - if self.__type__ is None: - return cls(typing._type_check(item, - f'{cls.__name__[1:]} accepts only single type.'), - _root=True) - raise TypeError(f'{cls.__name__[1:]} cannot be further subscripted') - - def _eval_type(self, globalns, localns): - new_tp = typing._eval_type(self.__type__, globalns, localns) - if new_tp == self.__type__: - return self - return type(self)(new_tp, _root=True) - - def __repr__(self): - r = super().__repr__() - if self.__type__ is not None: - r += f'[{typing._type_repr(self.__type__)}]' - return r - - def __hash__(self): - return hash((type(self).__name__, self.__type__)) - - def __eq__(self, other): - if not isinstance(other, _Final): - return NotImplemented - if self.__type__ is not None: - return self.__type__ == other.__type__ - return self is other - - Final = _Final(_root=True) - - -# 3.8+ -if hasattr(typing, 'final'): - final = typing.final -# 3.6-3.7 -else: - def final(f): - """This decorator can be used to indicate to type checkers that - the decorated method cannot be overridden, and decorated class - cannot be subclassed. For example: - - class Base: - @final - def done(self) -> None: - ... - class Sub(Base): - def done(self) -> None: # Error reported by type checker - ... - @final - class Leaf: - ... - class Other(Leaf): # Error reported by type checker - ... - - There is no runtime checking of these properties. - """ - return f - - -def IntVar(name): - return typing.TypeVar(name) - - -# 3.8+: -if hasattr(typing, 'Literal'): - Literal = typing.Literal -# 3.7: -elif sys.version_info[:2] >= (3, 7): - class _LiteralForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - return typing._GenericAlias(self, parameters) - - Literal = _LiteralForm('Literal', - doc="""A type that can be used to indicate to type checkers - that the corresponding value has a value literally equivalent - to the provided parameter. For example: - - var: Literal[4] = 4 - - The type checker understands that 'var' is literally equal to - the value 4 and no other value. - - Literal[...] cannot be subclassed. There is no runtime - checking verifying that the parameter is actually a value - instead of a type.""") -# 3.6: -else: - class _Literal(typing._FinalTypingBase, _root=True): - """A type that can be used to indicate to type checkers that the - corresponding value has a value literally equivalent to the - provided parameter. For example: - - var: Literal[4] = 4 - - The type checker understands that 'var' is literally equal to the - value 4 and no other value. - - Literal[...] cannot be subclassed. 
There is no runtime checking - verifying that the parameter is actually a value instead of a type. - """ - - __slots__ = ('__values__',) - - def __init__(self, values=None, **kwds): - self.__values__ = values - - def __getitem__(self, values): - cls = type(self) - if self.__values__ is None: - if not isinstance(values, tuple): - values = (values,) - return cls(values, _root=True) - raise TypeError(f'{cls.__name__[1:]} cannot be further subscripted') - - def _eval_type(self, globalns, localns): - return self - - def __repr__(self): - r = super().__repr__() - if self.__values__ is not None: - r += f'[{", ".join(map(typing._type_repr, self.__values__))}]' - return r - - def __hash__(self): - return hash((type(self).__name__, self.__values__)) - - def __eq__(self, other): - if not isinstance(other, _Literal): - return NotImplemented - if self.__values__ is not None: - return self.__values__ == other.__values__ - return self is other - - Literal = _Literal(_root=True) - - -_overload_dummy = typing._overload_dummy # noqa -overload = typing.overload - - -# This is not a real generic class. Don't use outside annotations. -Type = typing.Type - -# Various ABCs mimicking those in collections.abc. -# A few are simply re-exported for completeness. - - -class _ExtensionsGenericMeta(GenericMeta): - def __subclasscheck__(self, subclass): - """This mimics a more modern GenericMeta.__subclasscheck__() logic - (that does not have problems with recursion) to work around interactions - between collections, typing, and typing_extensions on older - versions of Python, see https://github.com/python/typing/issues/501. - """ - if self.__origin__ is not None: - if sys._getframe(1).f_globals['__name__'] not in ['abc', 'functools']: - raise TypeError("Parameterized generics cannot be used with class " - "or instance checks") - return False - if not self.__extra__: - return super().__subclasscheck__(subclass) - res = self.__extra__.__subclasshook__(subclass) - if res is not NotImplemented: - return res - if self.__extra__ in subclass.__mro__: - return True - for scls in self.__extra__.__subclasses__(): - if isinstance(scls, GenericMeta): - continue - if issubclass(subclass, scls): - return True - return False - - -Awaitable = typing.Awaitable -Coroutine = typing.Coroutine -AsyncIterable = typing.AsyncIterable -AsyncIterator = typing.AsyncIterator - -# 3.6.1+ -if hasattr(typing, 'Deque'): - Deque = typing.Deque -# 3.6.0 -else: - class Deque(collections.deque, typing.MutableSequence[T], - metaclass=_ExtensionsGenericMeta, - extra=collections.deque): - __slots__ = () - - def __new__(cls, *args, **kwds): - if cls._gorg is Deque: - return collections.deque(*args, **kwds) - return typing._generic_new(collections.deque, cls, *args, **kwds) - -ContextManager = typing.ContextManager -# 3.6.2+ -if hasattr(typing, 'AsyncContextManager'): - AsyncContextManager = typing.AsyncContextManager -# 3.6.0-3.6.1 -else: - from _collections_abc import _check_methods as _check_methods_in_mro # noqa - - class AsyncContextManager(typing.Generic[T_co]): - __slots__ = () - - async def __aenter__(self): - return self - - @abc.abstractmethod - async def __aexit__(self, exc_type, exc_value, traceback): - return None - - @classmethod - def __subclasshook__(cls, C): - if cls is AsyncContextManager: - return _check_methods_in_mro(C, "__aenter__", "__aexit__") - return NotImplemented - -DefaultDict = typing.DefaultDict - -# 3.7.2+ -if hasattr(typing, 'OrderedDict'): - OrderedDict = typing.OrderedDict -# 3.7.0-3.7.2 -elif (3, 7, 0) <= 
sys.version_info[:3] < (3, 7, 2): - OrderedDict = typing._alias(collections.OrderedDict, (KT, VT)) -# 3.6 -else: - class OrderedDict(collections.OrderedDict, typing.MutableMapping[KT, VT], - metaclass=_ExtensionsGenericMeta, - extra=collections.OrderedDict): - - __slots__ = () - - def __new__(cls, *args, **kwds): - if cls._gorg is OrderedDict: - return collections.OrderedDict(*args, **kwds) - return typing._generic_new(collections.OrderedDict, cls, *args, **kwds) - -# 3.6.2+ -if hasattr(typing, 'Counter'): - Counter = typing.Counter -# 3.6.0-3.6.1 -else: - class Counter(collections.Counter, - typing.Dict[T, int], - metaclass=_ExtensionsGenericMeta, extra=collections.Counter): - - __slots__ = () - - def __new__(cls, *args, **kwds): - if cls._gorg is Counter: - return collections.Counter(*args, **kwds) - return typing._generic_new(collections.Counter, cls, *args, **kwds) - -# 3.6.1+ -if hasattr(typing, 'ChainMap'): - ChainMap = typing.ChainMap -elif hasattr(collections, 'ChainMap'): - class ChainMap(collections.ChainMap, typing.MutableMapping[KT, VT], - metaclass=_ExtensionsGenericMeta, - extra=collections.ChainMap): - - __slots__ = () - - def __new__(cls, *args, **kwds): - if cls._gorg is ChainMap: - return collections.ChainMap(*args, **kwds) - return typing._generic_new(collections.ChainMap, cls, *args, **kwds) - -# 3.6.1+ -if hasattr(typing, 'AsyncGenerator'): - AsyncGenerator = typing.AsyncGenerator -# 3.6.0 -else: - class AsyncGenerator(AsyncIterator[T_co], typing.Generic[T_co, T_contra], - metaclass=_ExtensionsGenericMeta, - extra=collections.abc.AsyncGenerator): - __slots__ = () - -NewType = typing.NewType -Text = typing.Text -TYPE_CHECKING = typing.TYPE_CHECKING - - -def _gorg(cls): - """This function exists for compatibility with old typing versions.""" - assert isinstance(cls, GenericMeta) - if hasattr(cls, '_gorg'): - return cls._gorg - while cls.__origin__ is not None: - cls = cls.__origin__ - return cls - - -_PROTO_WHITELIST = ['Callable', 'Awaitable', - 'Iterable', 'Iterator', 'AsyncIterable', 'AsyncIterator', - 'Hashable', 'Sized', 'Container', 'Collection', 'Reversible', - 'ContextManager', 'AsyncContextManager'] - - -def _get_protocol_attrs(cls): - attrs = set() - for base in cls.__mro__[:-1]: # without object - if base.__name__ in ('Protocol', 'Generic'): - continue - annotations = getattr(base, '__annotations__', {}) - for attr in list(base.__dict__.keys()) + list(annotations.keys()): - if (not attr.startswith('_abc_') and attr not in ( - '__abstractmethods__', '__annotations__', '__weakref__', - '_is_protocol', '_is_runtime_protocol', '__dict__', - '__args__', '__slots__', - '__next_in_mro__', '__parameters__', '__origin__', - '__orig_bases__', '__extra__', '__tree_hash__', - '__doc__', '__subclasshook__', '__init__', '__new__', - '__module__', '_MutableMapping__marker', '_gorg')): - attrs.add(attr) - return attrs - - -def _is_callable_members_only(cls): - return all(callable(getattr(cls, attr, None)) for attr in _get_protocol_attrs(cls)) - - -# 3.8+ -if hasattr(typing, 'Protocol'): - Protocol = typing.Protocol -# 3.7 -elif PEP_560: - from typing import _collect_type_vars # noqa - - def _no_init(self, *args, **kwargs): - if type(self)._is_protocol: - raise TypeError('Protocols cannot be instantiated') - - class _ProtocolMeta(abc.ABCMeta): - # This metaclass is a bit unfortunate and exists only because of the lack - # of __instancehook__. - def __instancecheck__(cls, instance): - # We need this method for situations where attributes are - # assigned in __init__. 
- if ((not getattr(cls, '_is_protocol', False) or - _is_callable_members_only(cls)) and - issubclass(instance.__class__, cls)): - return True - if cls._is_protocol: - if all(hasattr(instance, attr) and - (not callable(getattr(cls, attr, None)) or - getattr(instance, attr) is not None) - for attr in _get_protocol_attrs(cls)): - return True - return super().__instancecheck__(instance) - - class Protocol(metaclass=_ProtocolMeta): - # There is quite a lot of overlapping code with typing.Generic. - # Unfortunately it is hard to avoid this while these live in two different - # modules. The duplicated code will be removed when Protocol is moved to typing. - """Base class for protocol classes. Protocol classes are defined as:: - - class Proto(Protocol): - def meth(self) -> int: - ... - - Such classes are primarily used with static type checkers that recognize - structural subtyping (static duck-typing), for example:: - - class C: - def meth(self) -> int: - return 0 - - def func(x: Proto) -> int: - return x.meth() - - func(C()) # Passes static type check - - See PEP 544 for details. Protocol classes decorated with - @typing_extensions.runtime act as simple-minded runtime protocol that checks - only the presence of given attributes, ignoring their type signatures. - - Protocol classes can be generic, they are defined as:: - - class GenProto(Protocol[T]): - def meth(self) -> T: - ... - """ - __slots__ = () - _is_protocol = True - - def __new__(cls, *args, **kwds): - if cls is Protocol: - raise TypeError("Type Protocol cannot be instantiated; " - "it can only be used as a base class") - return super().__new__(cls) - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple): - params = (params,) - if not params and cls is not typing.Tuple: - raise TypeError( - f"Parameter list to {cls.__qualname__}[...] cannot be empty") - msg = "Parameters to generic types must be types." - params = tuple(typing._type_check(p, msg) for p in params) # noqa - if cls is Protocol: - # Generic can only be subscripted with unique type variables. - if not all(isinstance(p, typing.TypeVar) for p in params): - i = 0 - while isinstance(params[i], typing.TypeVar): - i += 1 - raise TypeError( - "Parameters to Protocol[...] must all be type variables." - f" Parameter {i + 1} is {params[i]}") - if len(set(params)) != len(params): - raise TypeError( - "Parameters to Protocol[...] must all be unique") - else: - # Subscripting a regular Generic subclass. - _check_generic(cls, params) - return typing._GenericAlias(cls, params) - - def __init_subclass__(cls, *args, **kwargs): - tvars = [] - if '__orig_bases__' in cls.__dict__: - error = typing.Generic in cls.__orig_bases__ - else: - error = typing.Generic in cls.__bases__ - if error: - raise TypeError("Cannot inherit from plain Generic") - if '__orig_bases__' in cls.__dict__: - tvars = _collect_type_vars(cls.__orig_bases__) - # Look for Generic[T1, ..., Tn] or Protocol[T1, ..., Tn]. - # If found, tvars must be a subset of it. - # If not found, tvars is it. - # Also check for and reject plain Generic, - # and reject multiple Generic[...] and/or Protocol[...]. - gvars = None - for base in cls.__orig_bases__: - if (isinstance(base, typing._GenericAlias) and - base.__origin__ in (typing.Generic, Protocol)): - # for error messages - the_base = base.__origin__.__name__ - if gvars is not None: - raise TypeError( - "Cannot inherit from Generic[...]" - " and/or Protocol[...] 
multiple types.") - gvars = base.__parameters__ - if gvars is None: - gvars = tvars - else: - tvarset = set(tvars) - gvarset = set(gvars) - if not tvarset <= gvarset: - s_vars = ', '.join(str(t) for t in tvars if t not in gvarset) - s_args = ', '.join(str(g) for g in gvars) - raise TypeError(f"Some type variables ({s_vars}) are" - f" not listed in {the_base}[{s_args}]") - tvars = gvars - cls.__parameters__ = tuple(tvars) - - # Determine if this is a protocol or a concrete subclass. - if not cls.__dict__.get('_is_protocol', None): - cls._is_protocol = any(b is Protocol for b in cls.__bases__) - - # Set (or override) the protocol subclass hook. - def _proto_hook(other): - if not cls.__dict__.get('_is_protocol', None): - return NotImplemented - if not getattr(cls, '_is_runtime_protocol', False): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Instance and class checks can only be used with" - " @runtime protocols") - if not _is_callable_members_only(cls): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Protocols with non-method members" - " don't support issubclass()") - if not isinstance(other, type): - # Same error as for issubclass(1, int) - raise TypeError('issubclass() arg 1 must be a class') - for attr in _get_protocol_attrs(cls): - for base in other.__mro__: - if attr in base.__dict__: - if base.__dict__[attr] is None: - return NotImplemented - break - annotations = getattr(base, '__annotations__', {}) - if (isinstance(annotations, typing.Mapping) and - attr in annotations and - isinstance(other, _ProtocolMeta) and - other._is_protocol): - break - else: - return NotImplemented - return True - if '__subclasshook__' not in cls.__dict__: - cls.__subclasshook__ = _proto_hook - - # We have nothing more to do for non-protocols. - if not cls._is_protocol: - return - - # Check consistency of bases. - for base in cls.__bases__: - if not (base in (object, typing.Generic) or - base.__module__ == 'collections.abc' and - base.__name__ in _PROTO_WHITELIST or - isinstance(base, _ProtocolMeta) and base._is_protocol): - raise TypeError('Protocols can only inherit from other' - f' protocols, got {repr(base)}') - cls.__init__ = _no_init -# 3.6 -else: - from typing import _next_in_mro, _type_check # noqa - - def _no_init(self, *args, **kwargs): - if type(self)._is_protocol: - raise TypeError('Protocols cannot be instantiated') - - class _ProtocolMeta(GenericMeta): - """Internal metaclass for Protocol. - - This exists so Protocol classes can be generic without deriving - from Generic. - """ - def __new__(cls, name, bases, namespace, - tvars=None, args=None, origin=None, extra=None, orig_bases=None): - # This is just a version copied from GenericMeta.__new__ that - # includes "Protocol" special treatment. (Comments removed for brevity.) - assert extra is None # Protocols should not have extra - if tvars is not None: - assert origin is not None - assert all(isinstance(t, typing.TypeVar) for t in tvars), tvars - else: - tvars = _type_vars(bases) - gvars = None - for base in bases: - if base is typing.Generic: - raise TypeError("Cannot inherit from plain Generic") - if (isinstance(base, GenericMeta) and - base.__origin__ in (typing.Generic, Protocol)): - if gvars is not None: - raise TypeError( - "Cannot inherit from Generic[...] or" - " Protocol[...] 
multiple times.") - gvars = base.__parameters__ - if gvars is None: - gvars = tvars - else: - tvarset = set(tvars) - gvarset = set(gvars) - if not tvarset <= gvarset: - s_vars = ", ".join(str(t) for t in tvars if t not in gvarset) - s_args = ", ".join(str(g) for g in gvars) - cls_name = "Generic" if any(b.__origin__ is typing.Generic - for b in bases) else "Protocol" - raise TypeError(f"Some type variables ({s_vars}) are" - f" not listed in {cls_name}[{s_args}]") - tvars = gvars - - initial_bases = bases - if (extra is not None and type(extra) is abc.ABCMeta and - extra not in bases): - bases = (extra,) + bases - bases = tuple(_gorg(b) if isinstance(b, GenericMeta) else b - for b in bases) - if any(isinstance(b, GenericMeta) and b is not typing.Generic for b in bases): - bases = tuple(b for b in bases if b is not typing.Generic) - namespace.update({'__origin__': origin, '__extra__': extra}) - self = super(GenericMeta, cls).__new__(cls, name, bases, namespace, - _root=True) - super(GenericMeta, self).__setattr__('_gorg', - self if not origin else - _gorg(origin)) - self.__parameters__ = tvars - self.__args__ = tuple(... if a is typing._TypingEllipsis else - () if a is typing._TypingEmpty else - a for a in args) if args else None - self.__next_in_mro__ = _next_in_mro(self) - if orig_bases is None: - self.__orig_bases__ = initial_bases - elif origin is not None: - self._abc_registry = origin._abc_registry - self._abc_cache = origin._abc_cache - if hasattr(self, '_subs_tree'): - self.__tree_hash__ = (hash(self._subs_tree()) if origin else - super(GenericMeta, self).__hash__()) - return self - - def __init__(cls, *args, **kwargs): - super().__init__(*args, **kwargs) - if not cls.__dict__.get('_is_protocol', None): - cls._is_protocol = any(b is Protocol or - isinstance(b, _ProtocolMeta) and - b.__origin__ is Protocol - for b in cls.__bases__) - if cls._is_protocol: - for base in cls.__mro__[1:]: - if not (base in (object, typing.Generic) or - base.__module__ == 'collections.abc' and - base.__name__ in _PROTO_WHITELIST or - isinstance(base, typing.TypingMeta) and base._is_protocol or - isinstance(base, GenericMeta) and - base.__origin__ is typing.Generic): - raise TypeError(f'Protocols can only inherit from other' - f' protocols, got {repr(base)}') - - cls.__init__ = _no_init - - def _proto_hook(other): - if not cls.__dict__.get('_is_protocol', None): - return NotImplemented - if not isinstance(other, type): - # Same error as for issubclass(1, int) - raise TypeError('issubclass() arg 1 must be a class') - for attr in _get_protocol_attrs(cls): - for base in other.__mro__: - if attr in base.__dict__: - if base.__dict__[attr] is None: - return NotImplemented - break - annotations = getattr(base, '__annotations__', {}) - if (isinstance(annotations, typing.Mapping) and - attr in annotations and - isinstance(other, _ProtocolMeta) and - other._is_protocol): - break - else: - return NotImplemented - return True - if '__subclasshook__' not in cls.__dict__: - cls.__subclasshook__ = _proto_hook - - def __instancecheck__(self, instance): - # We need this method for situations where attributes are - # assigned in __init__. 
- if ((not getattr(self, '_is_protocol', False) or - _is_callable_members_only(self)) and - issubclass(instance.__class__, self)): - return True - if self._is_protocol: - if all(hasattr(instance, attr) and - (not callable(getattr(self, attr, None)) or - getattr(instance, attr) is not None) - for attr in _get_protocol_attrs(self)): - return True - return super(GenericMeta, self).__instancecheck__(instance) - - def __subclasscheck__(self, cls): - if self.__origin__ is not None: - if sys._getframe(1).f_globals['__name__'] not in ['abc', 'functools']: - raise TypeError("Parameterized generics cannot be used with class " - "or instance checks") - return False - if (self.__dict__.get('_is_protocol', None) and - not self.__dict__.get('_is_runtime_protocol', None)): - if sys._getframe(1).f_globals['__name__'] in ['abc', - 'functools', - 'typing']: - return False - raise TypeError("Instance and class checks can only be used with" - " @runtime protocols") - if (self.__dict__.get('_is_runtime_protocol', None) and - not _is_callable_members_only(self)): - if sys._getframe(1).f_globals['__name__'] in ['abc', - 'functools', - 'typing']: - return super(GenericMeta, self).__subclasscheck__(cls) - raise TypeError("Protocols with non-method members" - " don't support issubclass()") - return super(GenericMeta, self).__subclasscheck__(cls) - - @typing._tp_cache - def __getitem__(self, params): - # We also need to copy this from GenericMeta.__getitem__ to get - # special treatment of "Protocol". (Comments removed for brevity.) - if not isinstance(params, tuple): - params = (params,) - if not params and _gorg(self) is not typing.Tuple: - raise TypeError( - f"Parameter list to {self.__qualname__}[...] cannot be empty") - msg = "Parameters to generic types must be types." - params = tuple(_type_check(p, msg) for p in params) - if self in (typing.Generic, Protocol): - if not all(isinstance(p, typing.TypeVar) for p in params): - raise TypeError( - f"Parameters to {repr(self)}[...] must all be type variables") - if len(set(params)) != len(params): - raise TypeError( - f"Parameters to {repr(self)}[...] must all be unique") - tvars = params - args = params - elif self in (typing.Tuple, typing.Callable): - tvars = _type_vars(params) - args = params - elif self.__origin__ in (typing.Generic, Protocol): - raise TypeError(f"Cannot subscript already-subscripted {repr(self)}") - else: - _check_generic(self, params) - tvars = _type_vars(params) - args = params - - prepend = (self,) if self.__origin__ is None else () - return self.__class__(self.__name__, - prepend + self.__bases__, - _no_slots_copy(self.__dict__), - tvars=tvars, - args=args, - origin=self, - extra=self.__extra__, - orig_bases=self.__orig_bases__) - - class Protocol(metaclass=_ProtocolMeta): - """Base class for protocol classes. Protocol classes are defined as:: - - class Proto(Protocol): - def meth(self) -> int: - ... - - Such classes are primarily used with static type checkers that recognize - structural subtyping (static duck-typing), for example:: - - class C: - def meth(self) -> int: - return 0 - - def func(x: Proto) -> int: - return x.meth() - - func(C()) # Passes static type check - - See PEP 544 for details. Protocol classes decorated with - @typing_extensions.runtime act as simple-minded runtime protocol that checks - only the presence of given attributes, ignoring their type signatures. - - Protocol classes can be generic, they are defined as:: - - class GenProto(Protocol[T]): - def meth(self) -> T: - ... 
- """ - __slots__ = () - _is_protocol = True - - def __new__(cls, *args, **kwds): - if _gorg(cls) is Protocol: - raise TypeError("Type Protocol cannot be instantiated; " - "it can be used only as a base class") - return typing._generic_new(cls.__next_in_mro__, cls, *args, **kwds) - - -# 3.8+ -if hasattr(typing, 'runtime_checkable'): - runtime_checkable = typing.runtime_checkable -# 3.6-3.7 -else: - def runtime_checkable(cls): - """Mark a protocol class as a runtime protocol, so that it - can be used with isinstance() and issubclass(). Raise TypeError - if applied to a non-protocol class. - - This allows a simple-minded structural check very similar to the - one-offs in collections.abc such as Hashable. - """ - if not isinstance(cls, _ProtocolMeta) or not cls._is_protocol: - raise TypeError('@runtime_checkable can be only applied to protocol classes,' - f' got {cls!r}') - cls._is_runtime_protocol = True - return cls - - -# Exists for backwards compatibility. -runtime = runtime_checkable - - -# 3.8+ -if hasattr(typing, 'SupportsIndex'): - SupportsIndex = typing.SupportsIndex -# 3.6-3.7 -else: - @runtime_checkable - class SupportsIndex(Protocol): - __slots__ = () - - @abc.abstractmethod - def __index__(self) -> int: - pass - - -if sys.version_info >= (3, 9, 2): - # The standard library TypedDict in Python 3.8 does not store runtime information - # about which (if any) keys are optional. See https://bugs.python.org/issue38834 - # The standard library TypedDict in Python 3.9.0/1 does not honour the "total" - # keyword with old-style TypedDict(). See https://bugs.python.org/issue42059 - TypedDict = typing.TypedDict -else: - def _check_fails(cls, other): - try: - if sys._getframe(1).f_globals['__name__'] not in ['abc', - 'functools', - 'typing']: - # Typed dicts are only for static structural subtyping. 
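- # For example, given some TypedDict subclass Point2D (hypothetical
- # here), both of these raise at runtime (illustrative sketch):
- #     isinstance({'x': 1, 'y': 2}, Point2D)  # TypeError
- #     issubclass(dict, Point2D)              # TypeError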
- raise TypeError('TypedDict does not support instance and class checks')
- except (AttributeError, ValueError):
- pass
- return False
-
- def _dict_new(*args, **kwargs):
- if not args:
- raise TypeError('TypedDict.__new__(): not enough arguments')
- _, args = args[0], args[1:] # allow the "cls" keyword to be passed
- return dict(*args, **kwargs)
-
- _dict_new.__text_signature__ = '($cls, _typename, _fields=None, /, **kwargs)'
-
- def _typeddict_new(*args, total=True, **kwargs):
- if not args:
- raise TypeError('TypedDict.__new__(): not enough arguments')
- _, args = args[0], args[1:] # allow the "cls" keyword to be passed
- if args:
- typename, args = args[0], args[1:] # allow the "_typename" keyword to be passed
- elif '_typename' in kwargs:
- typename = kwargs.pop('_typename')
- import warnings
- warnings.warn("Passing '_typename' as keyword argument is deprecated",
- DeprecationWarning, stacklevel=2)
- else:
- raise TypeError("TypedDict.__new__() missing 1 required positional "
- "argument: '_typename'")
- if args:
- try:
- fields, = args # allow the "_fields" keyword to be passed
- except ValueError:
- raise TypeError('TypedDict.__new__() takes from 2 to 3 '
- f'positional arguments but {len(args) + 2} '
- 'were given')
- elif '_fields' in kwargs and len(kwargs) == 1:
- fields = kwargs.pop('_fields')
- import warnings
- warnings.warn("Passing '_fields' as keyword argument is deprecated",
- DeprecationWarning, stacklevel=2)
- else:
- fields = None
-
- if fields is None:
- fields = kwargs
- elif kwargs:
- raise TypeError("TypedDict takes either a dict or keyword arguments,"
- " but not both")
-
- ns = {'__annotations__': dict(fields)}
- try:
- # Setting correct module is necessary to make typed dict classes pickleable.
- ns['__module__'] = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- pass
-
- return _TypedDictMeta(typename, (), ns, total=total)
-
- _typeddict_new.__text_signature__ = ('($cls, _typename, _fields=None,'
- ' /, *, total=True, **kwargs)')
-
- class _TypedDictMeta(type):
- def __init__(cls, name, bases, ns, total=True):
- super().__init__(name, bases, ns)
-
- def __new__(cls, name, bases, ns, total=True):
- # Create new typed dict class object.
- # This method is called directly when TypedDict is subclassed,
- # or via _typeddict_new when TypedDict is instantiated. This way
- # TypedDict supports all three syntaxes described in its docstring.
- # Subclasses and instances of TypedDict return actual dictionaries
- # via _dict_new.
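- # Dispatch summary for the three syntaxes (illustrative sketch):
- #     class Point2D(TypedDict): ...      # subclass: this __new__ runs directly
- #     Movie = TypedDict('Movie', {...})  # call: routed through _typeddict_new
- #     Point2D(x=1, y=2)                  # instances: _dict_new -> plain dict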
- ns['__new__'] = _typeddict_new if name == 'TypedDict' else _dict_new
- tp_dict = super().__new__(cls, name, (dict,), ns)
-
- annotations = {}
- own_annotations = ns.get('__annotations__', {})
- own_annotation_keys = set(own_annotations.keys())
- msg = "TypedDict('Name', {f0: t0, f1: t1, ...}); each t must be a type"
- own_annotations = {
- n: typing._type_check(tp, msg) for n, tp in own_annotations.items()
- }
- required_keys = set()
- optional_keys = set()
-
- for base in bases:
- annotations.update(base.__dict__.get('__annotations__', {}))
- required_keys.update(base.__dict__.get('__required_keys__', ()))
- optional_keys.update(base.__dict__.get('__optional_keys__', ()))
-
- annotations.update(own_annotations)
- if total:
- required_keys.update(own_annotation_keys)
- else:
- optional_keys.update(own_annotation_keys)
-
- tp_dict.__annotations__ = annotations
- tp_dict.__required_keys__ = frozenset(required_keys)
- tp_dict.__optional_keys__ = frozenset(optional_keys)
- if not hasattr(tp_dict, '__total__'):
- tp_dict.__total__ = total
- return tp_dict
-
- __instancecheck__ = __subclasscheck__ = _check_fails
-
- TypedDict = _TypedDictMeta('TypedDict', (dict,), {})
- TypedDict.__module__ = __name__
- TypedDict.__doc__ = \
- """A simple typed namespace. At runtime it is equivalent to a plain dict.
-
- TypedDict creates a dictionary type that expects all of its
- instances to have a certain set of keys, with each key
- associated with a value of a consistent type. This expectation
- is not checked at runtime but is only enforced by type checkers.
- Usage::
-
- class Point2D(TypedDict):
- x: int
- y: int
- label: str
-
- a: Point2D = {'x': 1, 'y': 2, 'label': 'good'} # OK
- b: Point2D = {'z': 3, 'label': 'bad'} # Fails type check
-
- assert Point2D(x=1, y=2, label='first') == dict(x=1, y=2, label='first')
-
- The type info can be accessed via the Point2D.__annotations__ dict, and
- the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets.
- TypedDict supports two additional equivalent forms::
-
- Point2D = TypedDict('Point2D', x=int, y=int, label=str)
- Point2D = TypedDict('Point2D', {'x': int, 'y': int, 'label': str})
-
- The class syntax is only supported in Python 3.6+, while the two other
- syntax forms work for Python 2.7 and 3.2+.
- """
-
-
-# Python 3.9+ has PEP 593 (Annotated and modified get_type_hints)
-if hasattr(typing, 'Annotated'):
- Annotated = typing.Annotated
- get_type_hints = typing.get_type_hints
- # Not exported and not a public API, but needed for get_origin() and get_args()
- # to work.
- _AnnotatedAlias = typing._AnnotatedAlias
-# 3.7-3.8
-elif PEP_560:
- class _AnnotatedAlias(typing._GenericAlias, _root=True):
- """Runtime representation of an annotated type.
-
- At its core 'Annotated[t, dec1, dec2, ...]' is an alias for the type 't'
- with extra annotations. The alias behaves like a normal typing alias,
- instantiating is the same as instantiating the underlying type, and binding
- it to types is also the same.
- """
- def __init__(self, origin, metadata):
- if isinstance(origin, _AnnotatedAlias):
- metadata = origin.__metadata__ + metadata
- origin = origin.__origin__
- super().__init__(origin, origin)
- self.__metadata__ = metadata
-
- def copy_with(self, params):
- assert len(params) == 1
- new_type = params[0]
- return _AnnotatedAlias(new_type, self.__metadata__)
-
- def __repr__(self):
- return (f"typing_extensions.Annotated[{typing._type_repr(self.__origin__)}, "
- f"{', '.join(repr(a) for a in self.__metadata__)}]")
-
- def __reduce__(self):
- return operator.getitem, (
- Annotated, (self.__origin__,) + self.__metadata__
- )
-
- def __eq__(self, other):
- if not isinstance(other, _AnnotatedAlias):
- return NotImplemented
- if self.__origin__ != other.__origin__:
- return False
- return self.__metadata__ == other.__metadata__
-
- def __hash__(self):
- return hash((self.__origin__, self.__metadata__))
-
- class Annotated:
- """Add context specific metadata to a type.
-
- Example: Annotated[int, runtime_check.Unsigned] indicates to the
- hypothetical runtime_check module that this type is an unsigned int.
- Every other consumer of this type can ignore this metadata and treat
- this type as int.
-
- The first argument to Annotated must be a valid type (and will be in
- the __origin__ field), the remaining arguments are kept as a tuple in
- the __metadata__ field.
-
- Details:
-
- - It's an error to call `Annotated` with less than two arguments.
- - Nested Annotated are flattened::
-
- Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3]
-
- - Instantiating an annotated type is equivalent to instantiating the
- underlying type::
-
- Annotated[C, Ann1](5) == C(5)
-
- - Annotated can be used as a generic type alias::
-
- Optimized = Annotated[T, runtime.Optimize()]
- Optimized[int] == Annotated[int, runtime.Optimize()]
-
- OptimizedList = Annotated[List[T], runtime.Optimize()]
- OptimizedList[int] == Annotated[List[int], runtime.Optimize()]
- """
-
- __slots__ = ()
-
- def __new__(cls, *args, **kwargs):
- raise TypeError("Type Annotated cannot be instantiated.")
-
- @typing._tp_cache
- def __class_getitem__(cls, params):
- if not isinstance(params, tuple) or len(params) < 2:
- raise TypeError("Annotated[...] should be used "
- "with at least two arguments (a type and an "
- "annotation).")
- msg = "Annotated[t, ...]: t must be a type."
- origin = typing._type_check(params[0], msg)
- metadata = tuple(params[1:])
- return _AnnotatedAlias(origin, metadata)
-
- def __init_subclass__(cls, *args, **kwargs):
- raise TypeError(
- f"Cannot subclass {cls.__module__}.Annotated"
- )
-
- def _strip_annotations(t):
- """Strips the annotations from a given type.
- """
- if isinstance(t, _AnnotatedAlias):
- return _strip_annotations(t.__origin__)
- if isinstance(t, typing._GenericAlias):
- stripped_args = tuple(_strip_annotations(a) for a in t.__args__)
- if stripped_args == t.__args__:
- return t
- res = t.copy_with(stripped_args)
- res._special = t._special
- return res
- return t
-
- def get_type_hints(obj, globalns=None, localns=None, include_extras=False):
- """Return type hints for an object.
-
- This is often the same as obj.__annotations__, but it handles
- forward references encoded as string literals, adds Optional[t] if a
- default value equal to None is set and recursively replaces all
- 'Annotated[T, ...]' with 'T' (unless 'include_extras=True').
-
- The argument may be a module, class, method, or function. The annotations
- are returned as a dictionary.
For classes, annotations also include
- inherited members.
-
- TypeError is raised if the argument is not of a type that can contain
- annotations, and an empty dictionary is returned if no annotations are
- present.
-
- BEWARE -- the behavior of globalns and localns is counterintuitive
- (unless you are familiar with how eval() and exec() work). The
- search order is locals first, then globals.
-
- - If no dict arguments are passed, an attempt is made to use the
- globals from obj (or the respective module's globals for classes),
- and these are also used as the locals. If the object does not appear
- to have globals, an empty dictionary is used.
-
- - If one dict argument is passed, it is used for both globals and
- locals.
-
- - If two dict arguments are passed, they specify globals and
- locals, respectively.
- """
- hint = typing.get_type_hints(obj, globalns=globalns, localns=localns)
- if include_extras:
- return hint
- return {k: _strip_annotations(t) for k, t in hint.items()}
-# 3.6
-else:
-
- def _is_dunder(name):
- """Returns True if name is a __dunder_variable_name__."""
- return len(name) > 4 and name.startswith('__') and name.endswith('__')
-
- # Prior to Python 3.7 types did not have `copy_with`. A lot of the equality
- # checks, argument expansion etc. are done on the _subs_tree. As a result we
- # can't provide a get_type_hints function that strips out annotations.
-
- class AnnotatedMeta(typing.GenericMeta):
- """Metaclass for Annotated"""
-
- def __new__(cls, name, bases, namespace, **kwargs):
- if any(b is not object for b in bases):
- raise TypeError("Cannot subclass " + str(Annotated))
- return super().__new__(cls, name, bases, namespace, **kwargs)
-
- @property
- def __metadata__(self):
- return self._subs_tree()[2]
-
- def _tree_repr(self, tree):
- cls, origin, metadata = tree
- if not isinstance(origin, tuple):
- tp_repr = typing._type_repr(origin)
- else:
- tp_repr = origin[0]._tree_repr(origin)
- metadata_reprs = ", ".join(repr(arg) for arg in metadata)
- return f'{cls}[{tp_repr}, {metadata_reprs}]'
-
- def _subs_tree(self, tvars=None, args=None): # noqa
- if self is Annotated:
- return Annotated
- res = super()._subs_tree(tvars=tvars, args=args)
- # Flatten nested Annotated
- if isinstance(res[1], tuple) and res[1][0] is Annotated:
- sub_tp = res[1][1]
- sub_annot = res[1][2]
- return (Annotated, sub_tp, sub_annot + res[2])
- return res
-
- def _get_cons(self):
- """Return the class used to create instances of this type."""
- if self.__origin__ is None:
- raise TypeError("Cannot get the underlying type of a "
- "non-specialized Annotated type.")
- tree = self._subs_tree()
- while isinstance(tree, tuple) and tree[0] is Annotated:
- tree = tree[1]
- if isinstance(tree, tuple):
- return tree[0]
- else:
- return tree
-
- @typing._tp_cache
- def __getitem__(self, params):
- if not isinstance(params, tuple):
- params = (params,)
- if self.__origin__ is not None: # specializing an instantiated type
- return super().__getitem__(params)
- elif not isinstance(params, tuple) or len(params) < 2:
- raise TypeError("Annotated[...] should be instantiated "
- "with at least two arguments (a type and an "
- "annotation).")
- else:
- msg = "Annotated[t, ...]: t must be a type."
- tp = typing._type_check(params[0], msg)
- metadata = tuple(params[1:])
- return self.__class__(
- self.__name__,
- self.__bases__,
- _no_slots_copy(self.__dict__),
- tvars=_type_vars((tp,)),
- # Metadata is a tuple so it won't be touched by _replace_args et al.
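- # e.g. Annotated[int, 'units'] is stored with args=(int, ('units',)),
- # keeping the metadata opaque to the substitution machinery (sketch).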
- args=(tp, metadata), - origin=self, - ) - - def __call__(self, *args, **kwargs): - cons = self._get_cons() - result = cons(*args, **kwargs) - try: - result.__orig_class__ = self - except AttributeError: - pass - return result - - def __getattr__(self, attr): - # For simplicity we just don't relay all dunder names - if self.__origin__ is not None and not _is_dunder(attr): - return getattr(self._get_cons(), attr) - raise AttributeError(attr) - - def __setattr__(self, attr, value): - if _is_dunder(attr) or attr.startswith('_abc_'): - super().__setattr__(attr, value) - elif self.__origin__ is None: - raise AttributeError(attr) - else: - setattr(self._get_cons(), attr, value) - - def __instancecheck__(self, obj): - raise TypeError("Annotated cannot be used with isinstance().") - - def __subclasscheck__(self, cls): - raise TypeError("Annotated cannot be used with issubclass().") - - class Annotated(metaclass=AnnotatedMeta): - """Add context specific metadata to a type. - - Example: Annotated[int, runtime_check.Unsigned] indicates to the - hypothetical runtime_check module that this type is an unsigned int. - Every other consumer of this type can ignore this metadata and treat - this type as int. - - The first argument to Annotated must be a valid type, the remaining - arguments are kept as a tuple in the __metadata__ field. - - Details: - - - It's an error to call `Annotated` with less than two arguments. - - Nested Annotated are flattened:: - - Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3] - - - Instantiating an annotated type is equivalent to instantiating the - underlying type:: - - Annotated[C, Ann1](5) == C(5) - - - Annotated can be used as a generic type alias:: - - Optimized = Annotated[T, runtime.Optimize()] - Optimized[int] == Annotated[int, runtime.Optimize()] - - OptimizedList = Annotated[List[T], runtime.Optimize()] - OptimizedList[int] == Annotated[List[int], runtime.Optimize()] - """ - -# Python 3.8 has get_origin() and get_args() but those implementations aren't -# Annotated-aware, so we can't use those. Python 3.9's versions don't support -# ParamSpecArgs and ParamSpecKwargs, so only Python 3.10's versions will do. -if sys.version_info[:2] >= (3, 10): - get_origin = typing.get_origin - get_args = typing.get_args -# 3.7-3.9 -elif PEP_560: - try: - # 3.9+ - from typing import _BaseGenericAlias - except ImportError: - _BaseGenericAlias = typing._GenericAlias - try: - # 3.9+ - from typing import GenericAlias - except ImportError: - GenericAlias = typing._GenericAlias - - def get_origin(tp): - """Get the unsubscripted version of a type. - - This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar - and Annotated. Return None for unsupported types. Examples:: - - get_origin(Literal[42]) is Literal - get_origin(int) is None - get_origin(ClassVar[int]) is ClassVar - get_origin(Generic) is Generic - get_origin(Generic[T]) is Generic - get_origin(Union[T, int]) is Union - get_origin(List[Tuple[T, T]][int]) == list - get_origin(P.args) is P - """ - if isinstance(tp, _AnnotatedAlias): - return Annotated - if isinstance(tp, (typing._GenericAlias, GenericAlias, _BaseGenericAlias, - ParamSpecArgs, ParamSpecKwargs)): - return tp.__origin__ - if tp is typing.Generic: - return typing.Generic - return None - - def get_args(tp): - """Get type arguments with all substitutions performed. - - For unions, basic simplifications used by Union constructor are performed. 
- Examples:: - get_args(Dict[str, int]) == (str, int) - get_args(int) == () - get_args(Union[int, Union[T, int], str][int]) == (int, str) - get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int]) - get_args(Callable[[], T][int]) == ([], int) - """ - if isinstance(tp, _AnnotatedAlias): - return (tp.__origin__,) + tp.__metadata__ - if isinstance(tp, (typing._GenericAlias, GenericAlias)): - if getattr(tp, "_special", False): - return () - res = tp.__args__ - if get_origin(tp) is collections.abc.Callable and res[0] is not Ellipsis: - res = (list(res[:-1]), res[-1]) - return res - return () - - -# 3.10+ -if hasattr(typing, 'TypeAlias'): - TypeAlias = typing.TypeAlias -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeAliasForm - def TypeAlias(self, parameters): - """Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example above. - """ - raise TypeError(f"{self} is not subscriptable") -# 3.7-3.8 -elif sys.version_info[:2] >= (3, 7): - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - TypeAlias = _TypeAliasForm('TypeAlias', - doc="""Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example - above.""") -# 3.6 -else: - class _TypeAliasMeta(typing.TypingMeta): - """Metaclass for TypeAlias""" - - def __repr__(self): - return 'typing_extensions.TypeAlias' - - class _TypeAliasBase(typing._FinalTypingBase, metaclass=_TypeAliasMeta, _root=True): - """Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example above. - """ - __slots__ = () - - def __instancecheck__(self, obj): - raise TypeError("TypeAlias cannot be used with isinstance().") - - def __subclasscheck__(self, cls): - raise TypeError("TypeAlias cannot be used with issubclass().") - - def __repr__(self): - return 'typing_extensions.TypeAlias' - - TypeAlias = _TypeAliasBase(_root=True) - - -# Python 3.10+ has PEP 612 -if hasattr(typing, 'ParamSpecArgs'): - ParamSpecArgs = typing.ParamSpecArgs - ParamSpecKwargs = typing.ParamSpecKwargs -# 3.6-3.9 -else: - class _Immutable: - """Mixin to indicate that object should not be copied.""" - __slots__ = () - - def __copy__(self): - return self - - def __deepcopy__(self, memo): - return self - - class ParamSpecArgs(_Immutable): - """The args for a ParamSpec object. - - Given a ParamSpec object P, P.args is an instance of ParamSpecArgs. - - ParamSpecArgs objects have a reference back to their ParamSpec: - - P.args.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. - """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.args" - - class ParamSpecKwargs(_Immutable): - """The kwargs for a ParamSpec object. - - Given a ParamSpec object P, P.kwargs is an instance of ParamSpecKwargs. 
-
- ParamSpecKwargs objects have a reference back to their ParamSpec:
-
- P.kwargs.__origin__ is P
-
- This type is meant for runtime introspection and has no special meaning to
- static type checkers.
- """
- def __init__(self, origin):
- self.__origin__ = origin
-
- def __repr__(self):
- return f"{self.__origin__.__name__}.kwargs"
-
-# 3.10+
-if hasattr(typing, 'ParamSpec'):
- ParamSpec = typing.ParamSpec
-# 3.6-3.9
-else:
-
- # Inherits from list as a workaround for Callable checks in Python < 3.9.2.
- class ParamSpec(list):
- """Parameter specification variable.
-
- Usage::
-
- P = ParamSpec('P')
-
- Parameter specification variables exist primarily for the benefit of static
- type checkers. They are used to forward the parameter types of one
- callable to another callable, a pattern commonly found in higher order
- functions and decorators. They are only valid when used in ``Concatenate``,
- or as the first argument to ``Callable``. In Python 3.10 and higher,
- they are also supported in user-defined Generics at runtime.
- See class Generic for more information on generic types. An
- example for annotating a decorator::
-
- T = TypeVar('T')
- P = ParamSpec('P')
-
- def add_logging(f: Callable[P, T]) -> Callable[P, T]:
- '''A type-safe decorator to add logging to a function.'''
- def inner(*args: P.args, **kwargs: P.kwargs) -> T:
- logging.info(f'{f.__name__} was called')
- return f(*args, **kwargs)
- return inner
-
- @add_logging
- def add_two(x: float, y: float) -> float:
- '''Add two numbers together.'''
- return x + y
-
- Parameter specification variables defined with covariant=True or
- contravariant=True can be used to declare covariant or contravariant
- generic types. These keyword arguments are valid, but their actual semantics
- are yet to be decided. See PEP 612 for details.
-
- Parameter specification variables can be introspected. e.g.:
-
- P.__name__ == 'P'
- P.__bound__ == None
- P.__covariant__ == False
- P.__contravariant__ == False
-
- Note that only parameter specification variables defined in global scope can
- be pickled.
- """
-
- # Trick Generic __parameters__.
- __class__ = typing.TypeVar
-
- @property
- def args(self):
- return ParamSpecArgs(self)
-
- @property
- def kwargs(self):
- return ParamSpecKwargs(self)
-
- def __init__(self, name, *, bound=None, covariant=False, contravariant=False):
- super().__init__([self])
- self.__name__ = name
- self.__covariant__ = bool(covariant)
- self.__contravariant__ = bool(contravariant)
- if bound:
- self.__bound__ = typing._type_check(bound, 'Bound must be a type.')
- else:
- self.__bound__ = None
-
- # for pickling:
- try:
- def_mod = sys._getframe(1).f_globals.get('__name__', '__main__')
- except (AttributeError, ValueError):
- def_mod = None
- if def_mod != 'typing_extensions':
- self.__module__ = def_mod
-
- def __repr__(self):
- if self.__covariant__:
- prefix = '+'
- elif self.__contravariant__:
- prefix = '-'
- else:
- prefix = '~'
- return prefix + self.__name__
-
- def __hash__(self):
- return object.__hash__(self)
-
- def __eq__(self, other):
- return self is other
-
- def __reduce__(self):
- return self.__name__
-
- # Hack to get typing._type_check to pass.
- def __call__(self, *args, **kwargs):
- pass
-
- if not PEP_560:
- # Only needed in 3.6.
- def _get_type_vars(self, tvars):
- if self not in tvars:
- tvars.append(self)
-
-
-# 3.6-3.9
-if not hasattr(typing, 'Concatenate'):
- # Inherits from list as a workaround for Callable checks in Python < 3.9.2.
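- # (A genuine list instance satisfies the pre-3.9.2 typing.Callable
- # argument check, so Concatenate[...] is accepted where a parameter
- # list is expected -- an illustrative note on the workaround above.)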
- class _ConcatenateGenericAlias(list): - - # Trick Generic into looking into this for __parameters__. - if PEP_560: - __class__ = typing._GenericAlias - else: - __class__ = typing._TypingBase - - # Flag in 3.8. - _special = False - # Attribute in 3.6 and earlier. - _gorg = typing.Generic - - def __init__(self, origin, args): - super().__init__(args) - self.__origin__ = origin - self.__args__ = args - - def __repr__(self): - _type_repr = typing._type_repr - return (f'{_type_repr(self.__origin__)}' - f'[{", ".join(_type_repr(arg) for arg in self.__args__)}]') - - def __hash__(self): - return hash((self.__origin__, self.__args__)) - - # Hack to get typing._type_check to pass in Generic. - def __call__(self, *args, **kwargs): - pass - - @property - def __parameters__(self): - return tuple( - tp for tp in self.__args__ if isinstance(tp, (typing.TypeVar, ParamSpec)) - ) - - if not PEP_560: - # Only required in 3.6. - def _get_type_vars(self, tvars): - if self.__origin__ and self.__parameters__: - typing._get_type_vars(self.__parameters__, tvars) - - -# 3.6-3.9 -@typing._tp_cache -def _concatenate_getitem(self, parameters): - if parameters == (): - raise TypeError("Cannot take a Concatenate of no types.") - if not isinstance(parameters, tuple): - parameters = (parameters,) - if not isinstance(parameters[-1], ParamSpec): - raise TypeError("The last parameter to Concatenate should be a " - "ParamSpec variable.") - msg = "Concatenate[arg, ...]: each arg must be a type." - parameters = tuple(typing._type_check(p, msg) for p in parameters) - return _ConcatenateGenericAlias(self, parameters) - - -# 3.10+ -if hasattr(typing, 'Concatenate'): - Concatenate = typing.Concatenate - _ConcatenateGenericAlias = typing._ConcatenateGenericAlias # noqa -# 3.9 -elif sys.version_info[:2] >= (3, 9): - @_TypeAliasForm - def Concatenate(self, parameters): - """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """ - return _concatenate_getitem(self, parameters) -# 3.7-8 -elif sys.version_info[:2] >= (3, 7): - class _ConcatenateForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - return _concatenate_getitem(self, parameters) - - Concatenate = _ConcatenateForm( - 'Concatenate', - doc="""Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """) -# 3.6 -else: - class _ConcatenateAliasMeta(typing.TypingMeta): - """Metaclass for Concatenate.""" - - def __repr__(self): - return 'typing_extensions.Concatenate' - - class _ConcatenateAliasBase(typing._FinalTypingBase, - metaclass=_ConcatenateAliasMeta, - _root=True): - """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. 
- """ - __slots__ = () - - def __instancecheck__(self, obj): - raise TypeError("Concatenate cannot be used with isinstance().") - - def __subclasscheck__(self, cls): - raise TypeError("Concatenate cannot be used with issubclass().") - - def __repr__(self): - return 'typing_extensions.Concatenate' - - def __getitem__(self, parameters): - return _concatenate_getitem(self, parameters) - - Concatenate = _ConcatenateAliasBase(_root=True) - -# 3.10+ -if hasattr(typing, 'TypeGuard'): - TypeGuard = typing.TypeGuard -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeGuardForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeGuardForm - def TypeGuard(self, parameters): - """Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """ - item = typing._type_check(parameters, f'{self} accepts only single type.') - return typing._GenericAlias(self, (item,)) -# 3.7-3.8 -elif sys.version_info[:2] >= (3, 7): - class _TypeGuardForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type') - return typing._GenericAlias(self, (item,)) - - TypeGuard = _TypeGuardForm( - 'TypeGuard', - doc="""Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. 
The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """) -# 3.6 -else: - class _TypeGuard(typing._FinalTypingBase, _root=True): - """Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). 
- """
-
- __slots__ = ('__type__',)
-
- def __init__(self, tp=None, **kwds):
- self.__type__ = tp
-
- def __getitem__(self, item):
- cls = type(self)
- if self.__type__ is None:
- return cls(typing._type_check(item,
- f'{cls.__name__[1:]} accepts only a single type.'),
- _root=True)
- raise TypeError(f'{cls.__name__[1:]} cannot be further subscripted')
-
- def _eval_type(self, globalns, localns):
- new_tp = typing._eval_type(self.__type__, globalns, localns)
- if new_tp == self.__type__:
- return self
- return type(self)(new_tp, _root=True)
-
- def __repr__(self):
- r = super().__repr__()
- if self.__type__ is not None:
- r += f'[{typing._type_repr(self.__type__)}]'
- return r
-
- def __hash__(self):
- return hash((type(self).__name__, self.__type__))
-
- def __eq__(self, other):
- if not isinstance(other, _TypeGuard):
- return NotImplemented
- if self.__type__ is not None:
- return self.__type__ == other.__type__
- return self is other
-
- TypeGuard = _TypeGuard(_root=True)
-
-if hasattr(typing, "Self"):
- Self = typing.Self
-elif sys.version_info[:2] >= (3, 7):
- # Vendored from cpython typing._SpecialForm
- class _SpecialForm(typing._Final, _root=True):
- __slots__ = ('_name', '__doc__', '_getitem')
-
- def __init__(self, getitem):
- self._getitem = getitem
- self._name = getitem.__name__
- self.__doc__ = getitem.__doc__
-
- def __getattr__(self, item):
- if item in {'__name__', '__qualname__'}:
- return self._name
-
- raise AttributeError(item)
-
- def __mro_entries__(self, bases):
- raise TypeError(f"Cannot subclass {self!r}")
-
- def __repr__(self):
- return f'typing_extensions.{self._name}'
-
- def __reduce__(self):
- return self._name
-
- def __call__(self, *args, **kwds):
- raise TypeError(f"Cannot instantiate {self!r}")
-
- def __or__(self, other):
- return typing.Union[self, other]
-
- def __ror__(self, other):
- return typing.Union[other, self]
-
- def __instancecheck__(self, obj):
- raise TypeError(f"{self} cannot be used with isinstance()")
-
- def __subclasscheck__(self, cls):
- raise TypeError(f"{self} cannot be used with issubclass()")
-
- @typing._tp_cache
- def __getitem__(self, parameters):
- return self._getitem(self, parameters)
-
- @_SpecialForm
- def Self(self, params):
- """Used to spell the type of "self" in classes.
-
- Example::
-
- from typing_extensions import Self
-
- class ReturnsSelf:
- def parse(self, data: bytes) -> Self:
- ...
- return self
-
- """
-
- raise TypeError(f"{self} is not subscriptable")
-else:
- class _Self(typing._FinalTypingBase, _root=True):
- """Used to spell the type of "self" in classes.
-
- Example::
-
- from typing_extensions import Self
-
- class ReturnsSelf:
- def parse(self, data: bytes) -> Self:
- ...
- return self
-
- """
-
- __slots__ = ()
-
- def __instancecheck__(self, obj):
- raise TypeError(f"{self} cannot be used with isinstance().")
-
- def __subclasscheck__(self, cls):
- raise TypeError(f"{self} cannot be used with issubclass().")
-
- Self = _Self(_root=True)
-
-
-if hasattr(typing, 'Required'):
- Required = typing.Required
- NotRequired = typing.NotRequired
-elif sys.version_info[:2] >= (3, 9):
- class _ExtensionsSpecialForm(typing._SpecialForm, _root=True):
- def __repr__(self):
- return 'typing_extensions.' + self._name
-
- @_ExtensionsSpecialForm
- def Required(self, parameters):
- """A special typing construct to mark a key of a total=False TypedDict
- as required.
For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """ - item = typing._type_check(parameters, f'{self._name} accepts only single type') - return typing._GenericAlias(self, (item,)) - - @_ExtensionsSpecialForm - def NotRequired(self, parameters): - """A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """ - item = typing._type_check(parameters, f'{self._name} accepts only single type') - return typing._GenericAlias(self, (item,)) - -elif sys.version_info[:2] >= (3, 7): - class _RequiredForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - '{} accepts only single type'.format(self._name)) - return typing._GenericAlias(self, (item,)) - - Required = _RequiredForm( - 'Required', - doc="""A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """) - NotRequired = _RequiredForm( - 'NotRequired', - doc="""A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """) -else: - # NOTE: Modeled after _Final's implementation when _FinalTypingBase available - class _MaybeRequired(typing._FinalTypingBase, _root=True): - __slots__ = ('__type__',) - - def __init__(self, tp=None, **kwds): - self.__type__ = tp - - def __getitem__(self, item): - cls = type(self) - if self.__type__ is None: - return cls(typing._type_check(item, - '{} accepts only single type.'.format(cls.__name__[1:])), - _root=True) - raise TypeError('{} cannot be further subscripted' - .format(cls.__name__[1:])) - - def _eval_type(self, globalns, localns): - new_tp = typing._eval_type(self.__type__, globalns, localns) - if new_tp == self.__type__: - return self - return type(self)(new_tp, _root=True) - - def __repr__(self): - r = super().__repr__() - if self.__type__ is not None: - r += '[{}]'.format(typing._type_repr(self.__type__)) - return r - - def __hash__(self): - return hash((type(self).__name__, self.__type__)) - - def __eq__(self, other): - if not isinstance(other, type(self)): - return NotImplemented - if self.__type__ is not None: - return self.__type__ == other.__type__ - return self is other - - class _Required(_MaybeRequired, _root=True): - """A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. 
- """ - - class _NotRequired(_MaybeRequired, _root=True): - """A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """ - - Required = _Required(_root=True) - NotRequired = _NotRequired(_root=True) diff --git a/spaces/Atualli/node-media-server/app.js b/spaces/Atualli/node-media-server/app.js deleted file mode 100644 index 56bd1b94edbd59824dbab8da12b0fc76afd50920..0000000000000000000000000000000000000000 --- a/spaces/Atualli/node-media-server/app.js +++ /dev/null @@ -1,18 +0,0 @@ -const NodeMediaServer = require('node-media-server'); - -const config = { - rtmp: { - port: 7861, - chunk_size: 60000, - gop_cache: true, - ping: 30, - ping_timeout: 60 - }, - http: { - port: 7860, - allow_origin: '*' - } -}; - -var nms = new NodeMediaServer(config) -nms.run(); \ No newline at end of file diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/config/instantiate.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/config/instantiate.py deleted file mode 100644 index cbb32e19ea518eee84941b20f58d1054e84d1937..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/config/instantiate.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import dataclasses -import logging -from collections import abc -from typing import Any - -from detectron2.utils.registry import _convert_target_to_string, locate - -__all__ = ["dump_dataclass", "instantiate"] - - -def dump_dataclass(obj: Any): - """ - Dump a dataclass recursively into a dict that can be later instantiated. - - Args: - obj: a dataclass object - - Returns: - dict - """ - assert dataclasses.is_dataclass(obj) and not isinstance( - obj, type - ), "dump_dataclass() requires an instance of a dataclass." - ret = {"_target_": _convert_target_to_string(type(obj))} - for f in dataclasses.fields(obj): - v = getattr(obj, f.name) - if dataclasses.is_dataclass(v): - v = dump_dataclass(v) - if isinstance(v, (list, tuple)): - v = [dump_dataclass(x) if dataclasses.is_dataclass(x) else x for x in v] - ret[f.name] = v - return ret - - -def instantiate(cfg): - """ - Recursively instantiate objects defined in dictionaries by - "_target_" and arguments. - - Args: - cfg: a dict-like object with "_target_" that defines the caller, and - other keys that define the arguments - - Returns: - object instantiated by cfg - """ - from omegaconf import ListConfig - - if isinstance(cfg, ListConfig): - lst = [instantiate(x) for x in cfg] - return ListConfig(lst, flags={"allow_objects": True}) - if isinstance(cfg, list): - # Specialize for list, because many classes take - # list[objects] as arguments, such as ResNet, DatasetMapper - return [instantiate(x) for x in cfg] - - if isinstance(cfg, abc.Mapping) and "_target_" in cfg: - # conceptually equivalent to hydra.utils.instantiate(cfg) with _convert_=all, - # but faster: https://github.com/facebookresearch/hydra/issues/1200 - cfg = {k: instantiate(v) for k, v in cfg.items()} - cls = cfg.pop("_target_") - cls = instantiate(cls) - - if isinstance(cls, str): - cls_name = cls - cls = locate(cls_name) - assert cls is not None, cls_name - else: - try: - cls_name = cls.__module__ + "." 
+ cls.__qualname__
- except Exception:
- # target could be anything, so the above could fail
- cls_name = str(cls)
- assert callable(cls), f"_target_ {cls} does not define a callable object"
- try:
- return cls(**cfg)
- except TypeError:
- logger = logging.getLogger(__name__)
- logger.error(f"Error when instantiating {cls_name}!")
- raise
- return cfg # return as-is if we don't know what to do
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/mask_ops.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/mask_ops.py
deleted file mode 100644
index e7a9f3a323ddbe75845b668ee6b40c5385d206c3..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/mask_ops.py
+++ /dev/null
@@ -1,275 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import numpy as np
-from typing import Tuple
-import torch
-from PIL import Image
-from torch.nn import functional as F
-
-__all__ = ["paste_masks_in_image"]
-
-
-BYTES_PER_FLOAT = 4
-# TODO: This memory limit may be too much or too little. It would be better to
-# determine it based on available resources.
-GPU_MEM_LIMIT = 1024 ** 3 # 1 GB memory limit
-
-
-def _do_paste_mask(masks, boxes, img_h: int, img_w: int, skip_empty: bool = True):
- """
- Args:
- masks: N, 1, H, W
- boxes: N, 4
- img_h, img_w (int):
- skip_empty (bool): only paste masks within the region that
- tightly bounds all boxes, and return the results for this region only.
- An important optimization for CPU.
-
- Returns:
- if skip_empty == False, a mask of shape (N, img_h, img_w)
- if skip_empty == True, a mask of shape (N, h', w'), and the slice
- object for the corresponding region.
- """
- # On GPU, paste all masks together (up to chunk size)
- # by using the entire image to sample the masks
- # Compared to pasting them one by one,
- # this has more operations but is faster on COCO-scale datasets.
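- # (Sketch of the idea: build one normalized sampling grid over the
- # target region and let F.grid_sample paste a whole chunk of masks in
- # a single batched call, rather than resizing each mask individually.)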
- device = masks.device
-
- if skip_empty and not torch.jit.is_scripting():
- x0_int, y0_int = torch.clamp(boxes.min(dim=0).values.floor()[:2] - 1, min=0).to(
- dtype=torch.int32
- )
- x1_int = torch.clamp(boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32)
- y1_int = torch.clamp(boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32)
- else:
- x0_int, y0_int = 0, 0
- x1_int, y1_int = img_w, img_h
- x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1
-
- N = masks.shape[0]
-
- img_y = torch.arange(y0_int, y1_int, device=device, dtype=torch.float32) + 0.5
- img_x = torch.arange(x0_int, x1_int, device=device, dtype=torch.float32) + 0.5
- img_y = (img_y - y0) / (y1 - y0) * 2 - 1
- img_x = (img_x - x0) / (x1 - x0) * 2 - 1
- # img_x, img_y have shapes (N, w), (N, h)
-
- gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1))
- gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1))
- grid = torch.stack([gx, gy], dim=3)
-
- if not torch.jit.is_scripting():
- if not masks.dtype.is_floating_point:
- masks = masks.float()
- img_masks = F.grid_sample(masks, grid.to(masks.dtype), align_corners=False)
-
- if skip_empty and not torch.jit.is_scripting():
- return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int))
- else:
- return img_masks[:, 0], ()
-
-
-# Annotate boxes as Tensor (but not Boxes) in order to use scripting
-@torch.jit.script_if_tracing
-def paste_masks_in_image(
- masks: torch.Tensor, boxes: torch.Tensor, image_shape: Tuple[int, int], threshold: float = 0.5
-):
- """
- Paste a set of masks that are of a fixed resolution (e.g., 28 x 28) into an image.
- The location, height, and width for pasting each mask are determined by their
- corresponding bounding boxes in boxes.
-
- Note:
- This is a complicated but more accurate implementation. In actual deployment, it is
- often enough to use a faster but less accurate implementation.
- See :func:`paste_mask_in_image_old` in this file for an alternative implementation.
-
- Args:
- masks (tensor): Tensor of shape (Bimg, Hmask, Wmask), where Bimg is the number of
- detected object instances in the image and Hmask, Wmask are the mask height and mask
- width of the predicted mask (e.g., Hmask = Wmask = 28). Values are in [0, 1].
- boxes (Boxes or Tensor): A Boxes of length Bimg or Tensor of shape (Bimg, 4).
- boxes[i] and masks[i] correspond to the same object instance.
- image_shape (tuple): height, width
- threshold (float): A threshold in [0, 1] for converting the (soft) masks to
- binary masks.
-
- Returns:
- img_masks (Tensor): A tensor of shape (Bimg, Himage, Wimage), where Bimg is the
- number of detected object instances and Himage, Wimage are the image height
- and width. img_masks[i] is a binary mask for object instance i.
- """
-
- assert masks.shape[-1] == masks.shape[-2], "Only square mask predictions are supported"
- N = len(masks)
- if N == 0:
- return masks.new_empty((0,) + image_shape, dtype=torch.uint8)
- if not isinstance(boxes, torch.Tensor):
- boxes = boxes.tensor
- device = boxes.device
- assert len(boxes) == N, boxes.shape
-
- img_h, img_w = image_shape
-
- # The actual implementation splits the input into chunks,
- # and pastes them chunk by chunk.
- if device.type == "cpu" or torch.jit.is_scripting():
- # CPU is most efficient when they are pasted one by one with skip_empty=True
- # so that it performs a minimal number of operations.
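- # (i.e. one chunk per mask below, each pasted into its own tightly
- # cropped box region -- a descriptive note.)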
- num_chunks = N
- else:
- # GPU benefits from parallelism for larger chunks, but may have memory issues
- # int(img_h) because shapes may be tensors in tracing
- num_chunks = int(np.ceil(N * int(img_h) * int(img_w) * BYTES_PER_FLOAT / GPU_MEM_LIMIT))
- assert (
- num_chunks <= N
- ), "Default GPU_MEM_LIMIT in mask_ops.py is too small; try increasing it"
- chunks = torch.chunk(torch.arange(N, device=device), num_chunks)
-
- img_masks = torch.zeros(
- N, img_h, img_w, device=device, dtype=torch.bool if threshold >= 0 else torch.uint8
- )
- for inds in chunks:
- masks_chunk, spatial_inds = _do_paste_mask(
- masks[inds, None, :, :], boxes[inds], img_h, img_w, skip_empty=device.type == "cpu"
- )
-
- if threshold >= 0:
- masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool)
- else:
- # for visualization and debugging
- masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8)
-
- if torch.jit.is_scripting(): # Scripting does not use the optimized codepath
- img_masks[inds] = masks_chunk
- else:
- img_masks[(inds,) + spatial_inds] = masks_chunk
- return img_masks
-
-
-# Below is the original paste function (from Detectron1) which has
-# larger quantization error.
-# It is faster on CPU, while the aligned one is faster on GPU thanks to grid_sample.
-
-
-def paste_mask_in_image_old(mask, box, img_h, img_w, threshold):
- """
- Paste a single mask in an image.
- This is a per-box implementation of :func:`paste_masks_in_image`.
- This function has larger quantization error due to incorrect pixel
- modeling and is not used any more.
-
- Args:
- mask (Tensor): A tensor of shape (Hmask, Wmask) storing the mask of a single
- object instance. Values are in [0, 1].
- box (Tensor): A tensor of shape (4, ) storing the x0, y0, x1, y1 box corners
- of the object instance.
- img_h, img_w (int): Image height and width.
- threshold (float): Mask binarization threshold in [0, 1].
-
- Returns:
- im_mask (Tensor):
- The resized and binarized object mask pasted into the original
- image plane (a tensor of shape (img_h, img_w)).
- """
- # Conversion from continuous box coordinates to discrete pixel coordinates
- # via truncation (cast to int32). This determines which pixels to paste the
- # mask onto.
- box = box.to(dtype=torch.int32) # Continuous to discrete coordinate conversion
- # An example (1D) box with continuous coordinates (x0=0.7, x1=4.3) will map to
- # discrete coordinates (x0=0, x1=4). Note that box is mapped to 5 = x1 - x0 + 1
- # pixels (not x1 - x0 pixels).
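- # (Worked through for that example: samples_w = 4 - 0 + 1 = 5.)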
- samples_w = box[2] - box[0] + 1 # Number of pixel samples, *not* geometric width
- samples_h = box[3] - box[1] + 1 # Number of pixel samples, *not* geometric height
-
- # Resample the mask from its original grid to the new samples_w x samples_h grid
- mask = Image.fromarray(mask.cpu().numpy())
- mask = mask.resize((samples_w, samples_h), resample=Image.BILINEAR)
- mask = np.array(mask, copy=False)
-
- if threshold >= 0:
- mask = np.array(mask > threshold, dtype=np.uint8)
- mask = torch.from_numpy(mask)
- else:
- # for visualization and debugging, we also
- # allow it to return an unmodified mask
- mask = torch.from_numpy(mask * 255).to(torch.uint8)
-
- im_mask = torch.zeros((img_h, img_w), dtype=torch.uint8)
- x_0 = max(box[0], 0)
- x_1 = min(box[2] + 1, img_w)
- y_0 = max(box[1], 0)
- y_1 = min(box[3] + 1, img_h)
-
- im_mask[y_0:y_1, x_0:x_1] = mask[
- (y_0 - box[1]) : (y_1 - box[1]), (x_0 - box[0]) : (x_1 - box[0])
- ]
- return im_mask
-
-
-# Our pixel modeling requires extrapolation for any continuous
-# coordinate < 0.5 or > length - 0.5. When sampling pixels on the masks,
-# we would like this extrapolation to be an interpolation between boundary values and zero,
-# instead of using absolute zero or boundary values.
-# Therefore `paste_mask_in_image_old` is often used with zero padding around the masks like this:
-# masks, scale = pad_masks(masks[:, 0, :, :], 1)
-# boxes = scale_boxes(boxes.tensor, scale)
-
-
-def pad_masks(masks, padding):
- """
- Args:
- masks (tensor): A tensor of shape (B, M, M) representing B masks.
- padding (int): Number of cells to pad on all sides.
-
- Returns:
- The padded masks and the scale factor of the padding size / original size.
- """
- B = masks.shape[0]
- M = masks.shape[-1]
- pad2 = 2 * padding
- scale = float(M + pad2) / M
- padded_masks = masks.new_zeros((B, M + pad2, M + pad2))
- padded_masks[:, padding:-padding, padding:-padding] = masks
- return padded_masks, scale
-
-
-def scale_boxes(boxes, scale):
- """
- Args:
- boxes (tensor): A tensor of shape (B, 4) representing B boxes with 4
- coords representing the corners x0, y0, x1, y1.
- scale (float): The box scaling factor.
-
- Returns:
- Scaled boxes.
- """
- w_half = (boxes[:, 2] - boxes[:, 0]) * 0.5
- h_half = (boxes[:, 3] - boxes[:, 1]) * 0.5
- x_c = (boxes[:, 2] + boxes[:, 0]) * 0.5
- y_c = (boxes[:, 3] + boxes[:, 1]) * 0.5
-
- w_half *= scale
- h_half *= scale
-
- scaled_boxes = torch.zeros_like(boxes)
- scaled_boxes[:, 0] = x_c - w_half
- scaled_boxes[:, 2] = x_c + w_half
- scaled_boxes[:, 1] = y_c - h_half
- scaled_boxes[:, 3] = y_c + h_half
- return scaled_boxes
-
-
-@torch.jit.script_if_tracing
-def _paste_masks_tensor_shape(
- masks: torch.Tensor,
- boxes: torch.Tensor,
- image_shape: Tuple[torch.Tensor, torch.Tensor],
- threshold: float = 0.5,
-):
- """
- A wrapper of paste_masks_in_image where image_shape is Tensor.
- During tracing, shapes might be tensors instead of ints. The Tensor->int
- conversion should be scripted rather than traced.
- """ - return paste_masks_in_image(masks, boxes, (int(image_shape[0]), int(image_shape[1])), threshold) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_model_e2e.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_model_e2e.py deleted file mode 100644 index 5da35205eba60c739b8a919121f4e9a85a24138b..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/modeling/test_model_e2e.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - - -import itertools -import unittest -from contextlib import contextmanager -from copy import deepcopy -import torch - -from detectron2.structures import BitMasks, Boxes, ImageList, Instances -from detectron2.utils.events import EventStorage -from detectron2.utils.testing import get_model_no_weights - - -@contextmanager -def typecheck_hook(model, *, in_dtype=None, out_dtype=None): - """ - Check that the model must be called with the given input/output dtype - """ - if not isinstance(in_dtype, set): - in_dtype = {in_dtype} - if not isinstance(out_dtype, set): - out_dtype = {out_dtype} - - def flatten(x): - if isinstance(x, torch.Tensor): - return [x] - if isinstance(x, (list, tuple)): - return list(itertools.chain(*[flatten(t) for t in x])) - if isinstance(x, dict): - return flatten(list(x.values())) - return [] - - def hook(module, input, output): - if in_dtype is not None: - dtypes = {x.dtype for x in flatten(input)} - assert ( - dtypes == in_dtype - ), f"Expected input dtype of {type(module)} is {in_dtype}. Got {dtypes} instead!" - - if out_dtype is not None: - dtypes = {x.dtype for x in flatten(output)} - assert ( - dtypes == out_dtype - ), f"Expected output dtype of {type(module)} is {out_dtype}. Got {dtypes} instead!" 
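- # (Descriptive note: flatten() walks nested tensors, lists and dicts,
- # so a stray tensor with the wrong dtype anywhere in the structure
- # trips the assertions above.)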
- - with model.register_forward_hook(hook): - yield - - -def create_model_input(img, inst=None): - if inst is not None: - return {"image": img, "instances": inst} - else: - return {"image": img} - - -def get_empty_instance(h, w): - inst = Instances((h, w)) - inst.gt_boxes = Boxes(torch.rand(0, 4)) - inst.gt_classes = torch.tensor([]).to(dtype=torch.int64) - inst.gt_masks = BitMasks(torch.rand(0, h, w)) - return inst - - -def get_regular_bitmask_instances(h, w): - inst = Instances((h, w)) - inst.gt_boxes = Boxes(torch.rand(3, 4)) - inst.gt_boxes.tensor[:, 2:] += inst.gt_boxes.tensor[:, :2] - inst.gt_classes = torch.tensor([3, 4, 5]).to(dtype=torch.int64) - inst.gt_masks = BitMasks((torch.rand(3, h, w) > 0.5)) - return inst - - -class InstanceModelE2ETest: - def setUp(self): - torch.manual_seed(43) - self.model = get_model_no_weights(self.CONFIG_PATH) - - def _test_eval(self, input_sizes): - inputs = [create_model_input(torch.rand(3, s[0], s[1])) for s in input_sizes] - self.model.eval() - self.model(inputs) - - def _test_train(self, input_sizes, instances): - assert len(input_sizes) == len(instances) - inputs = [ - create_model_input(torch.rand(3, s[0], s[1]), inst) - for s, inst in zip(input_sizes, instances) - ] - self.model.train() - with EventStorage(): - losses = self.model(inputs) - sum(losses.values()).backward() - del losses - - def _inf_tensor(self, *shape): - return 1.0 / torch.zeros(*shape, device=self.model.device) - - def _nan_tensor(self, *shape): - return torch.zeros(*shape, device=self.model.device).fill_(float("nan")) - - def test_empty_data(self): - instances = [get_empty_instance(200, 250), get_empty_instance(200, 249)] - self._test_eval([(200, 250), (200, 249)]) - self._test_train([(200, 250), (200, 249)], instances) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA unavailable") - def test_eval_tocpu(self): - model = deepcopy(self.model).cpu() - model.eval() - input_sizes = [(200, 250), (200, 249)] - inputs = [create_model_input(torch.rand(3, s[0], s[1])) for s in input_sizes] - model(inputs) - - -class MaskRCNNE2ETest(InstanceModelE2ETest, unittest.TestCase): - CONFIG_PATH = "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml" - - def test_half_empty_data(self): - instances = [get_empty_instance(200, 250), get_regular_bitmask_instances(200, 249)] - self._test_train([(200, 250), (200, 249)], instances) - - # This test is flaky because in some environment the output features are zero due to relu - # def test_rpn_inf_nan_data(self): - # self.model.eval() - # for tensor in [self._inf_tensor, self._nan_tensor]: - # images = ImageList(tensor(1, 3, 512, 512), [(510, 510)]) - # features = { - # "p2": tensor(1, 256, 256, 256), - # "p3": tensor(1, 256, 128, 128), - # "p4": tensor(1, 256, 64, 64), - # "p5": tensor(1, 256, 32, 32), - # "p6": tensor(1, 256, 16, 16), - # } - # props, _ = self.model.proposal_generator(images, features) - # self.assertEqual(len(props[0]), 0) - - def test_roiheads_inf_nan_data(self): - self.model.eval() - for tensor in [self._inf_tensor, self._nan_tensor]: - images = ImageList(tensor(1, 3, 512, 512), [(510, 510)]) - features = { - "p2": tensor(1, 256, 256, 256), - "p3": tensor(1, 256, 128, 128), - "p4": tensor(1, 256, 64, 64), - "p5": tensor(1, 256, 32, 32), - "p6": tensor(1, 256, 16, 16), - } - props = [Instances((510, 510))] - props[0].proposal_boxes = Boxes([[10, 10, 20, 20]]).to(device=self.model.device) - props[0].objectness_logits = torch.tensor([1.0]).reshape(1, 1) - det, _ = self.model.roi_heads(images, features, props) - 
self.assertEqual(len(det[0]), 0) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_autocast(self): - from torch.cuda.amp import autocast - - inputs = [{"image": torch.rand(3, 100, 100)}] - self.model.eval() - with autocast(), typecheck_hook( - self.model.backbone, in_dtype=torch.float32, out_dtype=torch.float16 - ), typecheck_hook( - self.model.roi_heads.box_predictor, in_dtype=torch.float16, out_dtype=torch.float16 - ): - out = self.model.inference(inputs, do_postprocess=False)[0] - self.assertEqual(out.pred_boxes.tensor.dtype, torch.float32) - self.assertEqual(out.pred_masks.dtype, torch.float16) - self.assertEqual(out.scores.dtype, torch.float32) # scores comes from softmax - - -class RetinaNetE2ETest(InstanceModelE2ETest, unittest.TestCase): - CONFIG_PATH = "COCO-Detection/retinanet_R_50_FPN_1x.yaml" - - def test_inf_nan_data(self): - self.model.eval() - self.model.score_threshold = -999999999 - for tensor in [self._inf_tensor, self._nan_tensor]: - images = ImageList(tensor(1, 3, 512, 512), [(510, 510)]) - features = [ - tensor(1, 256, 128, 128), - tensor(1, 256, 64, 64), - tensor(1, 256, 32, 32), - tensor(1, 256, 16, 16), - tensor(1, 256, 8, 8), - ] - pred_logits, pred_anchor_deltas = self.model.head(features) - pred_logits = [tensor(*x.shape) for x in pred_logits] - pred_anchor_deltas = [tensor(*x.shape) for x in pred_anchor_deltas] - det = self.model.forward_inference(images, features, [pred_logits, pred_anchor_deltas]) - # all predictions (if any) are infinite or nan - if len(det[0]): - self.assertTrue(torch.isfinite(det[0].pred_boxes.tensor).sum() == 0) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_autocast(self): - from torch.cuda.amp import autocast - - inputs = [{"image": torch.rand(3, 100, 100)}] - self.model.eval() - with autocast(), typecheck_hook( - self.model.backbone, in_dtype=torch.float32, out_dtype=torch.float16 - ), typecheck_hook(self.model.head, in_dtype=torch.float16, out_dtype=torch.float16): - out = self.model(inputs)[0]["instances"] - self.assertEqual(out.pred_boxes.tensor.dtype, torch.float32) - self.assertEqual(out.scores.dtype, torch.float16) - - -class SemSegE2ETest(unittest.TestCase): - CONFIG_PATH = "Misc/semantic_R_50_FPN_1x.yaml" - - def setUp(self): - torch.manual_seed(43) - self.model = get_model_no_weights(self.CONFIG_PATH) - - def _test_eval(self, input_sizes): - inputs = [create_model_input(torch.rand(3, s[0], s[1])) for s in input_sizes] - self.model.eval() - self.model(inputs) - - def test_forward(self): - self._test_eval([(200, 250), (200, 249)]) diff --git a/spaces/Bart92/RVC_HF/julius/bands.py b/spaces/Bart92/RVC_HF/julius/bands.py deleted file mode 100644 index ef2162440b69e960770aa7bf81b9aaec48a63243..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/julius/bands.py +++ /dev/null @@ -1,119 +0,0 @@ -# File under the MIT license, see https://github.com/adefossez/julius/LICENSE for details. -# Author: adefossez, 2020 -""" -Decomposition of a signal over frequency bands in the waveform domain. -""" -from typing import Optional, Sequence -import torch - -from .core import mel_frequencies -from .lowpass import LowPassFilters -from .utils import simple_repr - - -class SplitBands(torch.nn.Module): - """ - Decomposes a signal over the given frequency bands in the waveform domain using - a cascade of low pass filters as implemented by `julius.lowpass.LowPassFilters`. 
-    You can either explicitly specify the frequency cutoffs, or just the number of bands,
-    in which case the frequency cutoffs will be spread out evenly in mel scale.
-
-    Args:
-        sample_rate (float): Sample rate of the input signal in Hz.
-        n_bands (int or None): number of bands, when not giving them explicitly with `cutoffs`.
-            In that case, the cutoff frequencies will be evenly spaced in mel-space.
-        cutoffs (list[float] or None): list of frequency cutoffs in Hz.
-        pad (bool): if True, appropriately pad the input with zero over the edge. If `stride=1`,
-            the output will have the same length as the input.
-        zeros (float): Number of zero crossings to keep. See `LowPassFilters` for more information.
-        fft (bool or None): See `LowPassFilters` for more info.
-
-    .. note::
-        The sum of all the bands will always be the input signal.
-
-    .. warning::
-        Unlike `julius.lowpass.LowPassFilters`, the cutoff frequencies must be provided in Hz along
-        with the sample rate.
-
-    Shape:
-
-        - Input: `[*, T]`
-        - Output: `[B, *, T']`, with `T'=T` if `pad` is True.
-            If `n_bands` was provided, `B = n_bands` otherwise `B = len(cutoffs) + 1`
-
-    >>> bands = SplitBands(sample_rate=128, n_bands=10)
-    >>> x = torch.randn(6, 4, 1024)
-    >>> list(bands(x).shape)
-    [10, 6, 4, 1024]
-    """
-
-    def __init__(self, sample_rate: float, n_bands: Optional[int] = None,
-                 cutoffs: Optional[Sequence[float]] = None, pad: bool = True,
-                 zeros: float = 8, fft: Optional[bool] = None):
-        super().__init__()
-        if (cutoffs is None) + (n_bands is None) != 1:
-            raise ValueError("You must provide either n_bands, or cutoffs, but not both.")
-
-        self.sample_rate = sample_rate
-        self.n_bands = n_bands
-        self._cutoffs = list(cutoffs) if cutoffs is not None else None
-        self.pad = pad
-        self.zeros = zeros
-        self.fft = fft
-
-        if cutoffs is None:
-            if n_bands is None:
-                raise ValueError("You must provide one of n_bands or cutoffs.")
-            if not n_bands >= 1:
-                raise ValueError(f"n_bands must be greater than one (got {n_bands})")
-            cutoffs = mel_frequencies(n_bands + 1, 0, sample_rate / 2)[1:-1]
-        else:
-            if max(cutoffs) > 0.5 * sample_rate:
-                raise ValueError("A cutoff above sample_rate/2 does not make sense.")
-        if len(cutoffs) > 0:
-            self.lowpass = LowPassFilters(
-                [c / sample_rate for c in cutoffs], pad=pad, zeros=zeros, fft=fft)
-        else:
-            # Here I cannot make both TorchScript and MyPy happy.
-            # I miss the good old times, before all this madness was created.
-            self.lowpass = None  # type: ignore
-
-    def forward(self, input):
-        if self.lowpass is None:
-            return input[None]
-        lows = self.lowpass(input)
-        low = lows[0]
-        bands = [low]
-        for low_and_band in lows[1:]:
-            # Get a bandpass filter by subtracting lowpasses
-            band = low_and_band - low
-            bands.append(band)
-            low = low_and_band
-        # Last band is whatever is left in the signal
-        bands.append(input - low)
-        return torch.stack(bands)
-
-    @property
-    def cutoffs(self):
-        if self._cutoffs is not None:
-            return self._cutoffs
-        elif self.lowpass is not None:
-            return [c * self.sample_rate for c in self.lowpass.cutoffs]
-        else:
-            return []
-
-    def __repr__(self):
-        return simple_repr(self, overrides={"cutoffs": self._cutoffs})
-
-
-def split_bands(signal: torch.Tensor, sample_rate: float, n_bands: Optional[int] = None,
-                cutoffs: Optional[Sequence[float]] = None, pad: bool = True,
-                zeros: float = 8, fft: Optional[bool] = None):
-    """
-    Functional version of `SplitBands`, refer to this class for more information.
- - >>> x = torch.randn(6, 4, 1024) - >>> list(split_bands(x, sample_rate=64, cutoffs=[12, 24]).shape) - [3, 6, 4, 1024] - """ - return SplitBands(sample_rate, n_bands, cutoffs, pad, zeros, fft).to(signal)(signal) diff --git a/spaces/Bart92/RVC_HF/train/losses.py b/spaces/Bart92/RVC_HF/train/losses.py deleted file mode 100644 index b89038f14d06d7fae43628183e9ffb465e4edafd..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/train/losses.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch -from torch.nn import functional as F - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg**2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Benson/text-generation/Examples/Descargar Garena Drifters Velocidad.md b/spaces/Benson/text-generation/Examples/Descargar Garena Drifters Velocidad.md deleted file mode 100644 index 0fcad0177ac2d9480a2d4e840cbc4f03f82e05bb..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Garena Drifters Velocidad.md +++ /dev/null @@ -1,73 +0,0 @@ - -

Download Garena AOV Mod Unlimited Money: How to Get the Best MOBA Experience on Your Mobile Device

-

If you are a fan of multiplayer online battle arena (MOBA) games, you may have heard of Garena AOV, one of the most popular and exciting games in this genre. But did you know that you can download Garena AOV mod unlimited money and get access to premium features, content, and resources that will enhance your gaming experience? In this article, we will tell you everything you need to know about Garena AOV, why you might download its modded version, and how to do it as safely and easily as possible.

-

What is Garena AOV?

-

Garena AOV is a 5v5 MOBA game developed by Tencent Games and published by Garena. It is also known as Arena of Valor or Realm of Valor in some regions. The game features ultra-HD graphics, smooth gameplay, balanced heroes, and several modes to suit different preferences and skill levels. You can choose from more than 100 heroes, each with their own abilities, roles, and styles. You can also team up with your friends or other players online and compete in ranked matches, casual matches, or special events. The game is free to download and play, but it also offers in-app purchases for some items and services.

-




-

Features of Garena AOV

-

Some of the features that make Garena AOV stand out from other MOBA games are:

- -

Benefits of playing Garena AOV

-

Playing Garena AOV can bring you many benefits, such as:

- -

Why download Garena AOV mod unlimited money?

-

As mentioned above, Garena AOV is free to download and play, but it also has some in-app purchases that can enhance your gaming experience. For example, you can buy gems, vouchers, gold coins, arcana chests, hero chests, skin chests, and so on. These items can help you unlock new heroes, skins, arcana sets, talents, etc. However, they are not cheap and can cost a lot of real money. Not everyone can afford to spend that much on a game, especially on a tight budget or with other priorities. That is why some people look for ways to get these items for free or at a lower cost. One way to do that is to download Garena AOV mod unlimited money.

-

Advantages of using Garena AOV mod unlimited money

-

Garena AOV mod unlimited money is a modified version of the original game that gives you access to unlimited gems, vouchers, gold coins, and other resources. With this mod, you can:

- -

The risks of using Garena AOV mod unlimited money

-

However, using Garena AOV mod unlimited money also comes with some risks and drawbacks that you should be aware of before downloading it. Some of them are:

- -

How to download and install Garena AOV mod unlimited money?

-

If you still want to download and install Garena AOV mod unlimited money despite the risks, you need to follow a few steps carefully and cautiously. These are the steps to follow:

-

Step 1: Find a reliable source for the mod apk file
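If the page you download from publishes a checksum for the file, it is worth comparing it against the hash of what you actually received before going any further. Below is a minimal sketch using only the Python standard library; the file name and the expected hash are hypothetical placeholders, not values from any real download page:

```python
import hashlib
from pathlib import Path


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Hypothetical values: substitute the real file name and the checksum
# published by the download page, if it provides one.
APK_PATH = "garena-aov-mod.apk"
EXPECTED_SHA256 = "0" * 64

if sha256_of(APK_PATH) != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: do not install this file.")
print("Checksum OK.")
```

A matching hash only proves the file is the one the site intended to serve; it says nothing about whether that file is safe, so an antivirus scan is still advisable.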

- -

Step 2: Enable unknown sources in your device settings

-

The next thing you need to do is enable unknown sources in your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to your device settings > security > unknown sources > enable. You may also need to disable any antivirus software or firewall that could block or interfere with the installation process.
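Note that the exact menu depends on the Android version: since Android 8.0, the single global "unknown sources" switch was replaced by a per-app "Install unknown apps" permission. If the phone is connected over USB with debugging enabled, one quick way to check which case applies is to ask the device for its version through adb; a minimal sketch, assuming adb is installed and on your PATH:

```python
import subprocess


def android_release() -> str:
    """Query the connected device's Android release string via adb."""
    result = subprocess.run(
        ["adb", "shell", "getprop", "ro.build.version.release"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


print(f"Connected device runs Android {android_release()}")
# On Android 8.0 and later, grant "Install unknown apps" to the specific
# app (browser or file manager) you open the apk with, instead of
# looking for a single global "unknown sources" toggle.
```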

-

-

Step 3: Download and install the mod apk file

-

The third thing you need to do is download and install the mod apk file on your device. To do this, go to the web page where you found the mod and click the download button. Wait for the download to finish, then locate the file in your device storage. Tap the file and follow the on-screen instructions to install it. You may need to grant some permissions or accept some terms and conditions during the installation process.
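If you would rather install from a computer than tap through the on-device installer, the same step can be done by sideloading the file with adb. A minimal sketch, assuming adb is installed, USB debugging is enabled, and reusing the hypothetical file name from the checksum example above:

```python
import subprocess

APK_PATH = "garena-aov-mod.apk"  # hypothetical file name

# `adb install -r` installs the package, replacing an existing install
# while keeping its data; `check=True` raises if adb reports an error.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
print("Install finished; check the phone for any confirmation prompts.")
```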

-

Step 4: Launch the game and enjoy the unlimited money

-

The last thing you need to do is launch the game and enjoy the unlimited money. To do so, open the game from the app drawer or home screen and log in with your account. You should see unlimited gems, vouchers, gold coins, and other resources in your account. You can now use them to buy whatever you want from the shop or unlock any hero or skin you like. Have fun playing Garena AOV with your friends or other players online!

-

Conclusion

- -

Frequently asked questions

-

Here are some frequently asked questions about Garena AOV mod unlimited money:

-
    -
  1. Is Garena AOV mod unlimited money legal?
  -

    No, Garena AOV mod unlimited money is not legal. It is a modified version of the original game that violates the terms and conditions of the game and its developers. Using this mod can get you banned from the game or expose you to legal action from the authorities.

    -
  2. Is Garena AOV mod unlimited money safe?
  -

    Not necessarily. Garena AOV mod unlimited money may contain viruses, malware, spyware, or other harmful programs that can damage your device or steal your personal information. You should always scan the mod apk file with reputable antivirus software before downloading and installing it. You should also back up your data and use a secondary account to play the game with this mod.

    -
  3. Is Garena AOV mod unlimited money free?
  -

    Yes, Garena AOV mod unlimited money is free to download and use. However, some websites may ask you to complete surveys, offers, or tasks before giving you the download link. Avoid these websites, as they may be scams or phishing attempts. Also watch out for any hidden costs or limitations that may come with this mod.

    -
  4. How can I update Garena AOV mod unlimited money?
  -

    You can update Garena AOV mod unlimited money by following the same steps you used to download and install it. However, always check that the mod is compatible with the latest version of the game and with your device before updating. Back up your data and uninstall the old version of the mod before installing the new one.

    -
  5. Where can I find more information about Garena AOV?
  -

-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexer.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexer.py deleted file mode 100644 index 74ab9b9088fa6af68976545ffc1ba94c3e9685ca..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/lexer.py +++ /dev/null @@ -1,883 +0,0 @@ -""" - pygments.lexer - ~~~~~~~~~~~~~~ - - Base lexer classes. - - :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re -import sys -import time - -from pip._vendor.pygments.filter import apply_filters, Filter -from pip._vendor.pygments.filters import get_filter_by_name -from pip._vendor.pygments.token import Error, Text, Other, Whitespace, _TokenType -from pip._vendor.pygments.util import get_bool_opt, get_int_opt, get_list_opt, \ - make_analysator, Future, guess_decode -from pip._vendor.pygments.regexopt import regex_opt - -__all__ = ['Lexer', 'RegexLexer', 'ExtendedRegexLexer', 'DelegatingLexer', - 'LexerContext', 'include', 'inherit', 'bygroups', 'using', 'this', - 'default', 'words', 'line_re'] - -line_re = re.compile('.*?\n') - -_encoding_map = [(b'\xef\xbb\xbf', 'utf-8'), - (b'\xff\xfe\0\0', 'utf-32'), - (b'\0\0\xfe\xff', 'utf-32be'), - (b'\xff\xfe', 'utf-16'), - (b'\xfe\xff', 'utf-16be')] - -_default_analyse = staticmethod(lambda x: 0.0) - - -class LexerMeta(type): - """ - This metaclass automagically converts ``analyse_text`` methods into - static methods which always return float values. - """ - - def __new__(mcs, name, bases, d): - if 'analyse_text' in d: - d['analyse_text'] = make_analysator(d['analyse_text']) - return type.__new__(mcs, name, bases, d) - - -class Lexer(metaclass=LexerMeta): - """ - Lexer for a specific language. - - Basic options recognized: - ``stripnl`` - Strip leading and trailing newlines from the input (default: True). - ``stripall`` - Strip all leading and trailing whitespace from the input - (default: False). - ``ensurenl`` - Make sure that the input ends with a newline (default: True). This - is required for some lexers that consume input linewise. - - .. versionadded:: 1.3 - - ``tabsize`` - If given and greater than 0, expand tabs in the input (default: 0). - ``encoding`` - If given, must be an encoding name. This encoding will be used to - convert the input string to Unicode, if it is not already a Unicode - string (default: ``'guess'``, which uses a simple UTF-8 / Locale / - Latin1 detection. Can also be ``'chardet'`` to use the chardet - library, if it is installed. - ``inencoding`` - Overrides the ``encoding`` if given. 
- """ - - #: Name of the lexer - name = None - - #: URL of the language specification/definition - url = None - - #: Shortcuts for the lexer - aliases = [] - - #: File name globs - filenames = [] - - #: Secondary file name globs - alias_filenames = [] - - #: MIME types - mimetypes = [] - - #: Priority, should multiple lexers match and no content is provided - priority = 0 - - def __init__(self, **options): - self.options = options - self.stripnl = get_bool_opt(options, 'stripnl', True) - self.stripall = get_bool_opt(options, 'stripall', False) - self.ensurenl = get_bool_opt(options, 'ensurenl', True) - self.tabsize = get_int_opt(options, 'tabsize', 0) - self.encoding = options.get('encoding', 'guess') - self.encoding = options.get('inencoding') or self.encoding - self.filters = [] - for filter_ in get_list_opt(options, 'filters', ()): - self.add_filter(filter_) - - def __repr__(self): - if self.options: - return '' % (self.__class__.__name__, - self.options) - else: - return '' % self.__class__.__name__ - - def add_filter(self, filter_, **options): - """ - Add a new stream filter to this lexer. - """ - if not isinstance(filter_, Filter): - filter_ = get_filter_by_name(filter_, **options) - self.filters.append(filter_) - - def analyse_text(text): - """ - Has to return a float between ``0`` and ``1`` that indicates - if a lexer wants to highlight this text. Used by ``guess_lexer``. - If this method returns ``0`` it won't highlight it in any case, if - it returns ``1`` highlighting with this lexer is guaranteed. - - The `LexerMeta` metaclass automatically wraps this function so - that it works like a static method (no ``self`` or ``cls`` - parameter) and the return value is automatically converted to - `float`. If the return value is an object that is boolean `False` - it's the same as if the return values was ``0.0``. - """ - - def get_tokens(self, text, unfiltered=False): - """ - Return an iterable of (tokentype, value) pairs generated from - `text`. If `unfiltered` is set to `True`, the filtering mechanism - is bypassed even if filters are defined. - - Also preprocess the text, i.e. expand tabs and strip it if - wanted and applies registered filters. 
- """ - if not isinstance(text, str): - if self.encoding == 'guess': - text, _ = guess_decode(text) - elif self.encoding == 'chardet': - try: - from pip._vendor import chardet - except ImportError as e: - raise ImportError('To enable chardet encoding guessing, ' - 'please install the chardet library ' - 'from http://chardet.feedparser.org/') from e - # check for BOM first - decoded = None - for bom, encoding in _encoding_map: - if text.startswith(bom): - decoded = text[len(bom):].decode(encoding, 'replace') - break - # no BOM found, so use chardet - if decoded is None: - enc = chardet.detect(text[:1024]) # Guess using first 1KB - decoded = text.decode(enc.get('encoding') or 'utf-8', - 'replace') - text = decoded - else: - text = text.decode(self.encoding) - if text.startswith('\ufeff'): - text = text[len('\ufeff'):] - else: - if text.startswith('\ufeff'): - text = text[len('\ufeff'):] - - # text now *is* a unicode string - text = text.replace('\r\n', '\n') - text = text.replace('\r', '\n') - if self.stripall: - text = text.strip() - elif self.stripnl: - text = text.strip('\n') - if self.tabsize > 0: - text = text.expandtabs(self.tabsize) - if self.ensurenl and not text.endswith('\n'): - text += '\n' - - def streamer(): - for _, t, v in self.get_tokens_unprocessed(text): - yield t, v - stream = streamer() - if not unfiltered: - stream = apply_filters(stream, self.filters, self) - return stream - - def get_tokens_unprocessed(self, text): - """ - Return an iterable of (index, tokentype, value) pairs where "index" - is the starting position of the token within the input text. - - In subclasses, implement this method as a generator to - maximize effectiveness. - """ - raise NotImplementedError - - -class DelegatingLexer(Lexer): - """ - This lexer takes two lexer as arguments. A root lexer and - a language lexer. First everything is scanned using the language - lexer, afterwards all ``Other`` tokens are lexed using the root - lexer. - - The lexers from the ``template`` lexer package use this base lexer. - """ - - def __init__(self, _root_lexer, _language_lexer, _needle=Other, **options): - self.root_lexer = _root_lexer(**options) - self.language_lexer = _language_lexer(**options) - self.needle = _needle - Lexer.__init__(self, **options) - - def get_tokens_unprocessed(self, text): - buffered = '' - insertions = [] - lng_buffer = [] - for i, t, v in self.language_lexer.get_tokens_unprocessed(text): - if t is self.needle: - if lng_buffer: - insertions.append((len(buffered), lng_buffer)) - lng_buffer = [] - buffered += v - else: - lng_buffer.append((i, t, v)) - if lng_buffer: - insertions.append((len(buffered), lng_buffer)) - return do_insertions(insertions, - self.root_lexer.get_tokens_unprocessed(buffered)) - - -# ------------------------------------------------------------------------------ -# RegexLexer and ExtendedRegexLexer -# - - -class include(str): # pylint: disable=invalid-name - """ - Indicates that a state should include rules from another state. - """ - pass - - -class _inherit: - """ - Indicates the a state should inherit from its superclass. - """ - def __repr__(self): - return 'inherit' - -inherit = _inherit() # pylint: disable=invalid-name - - -class combined(tuple): # pylint: disable=invalid-name - """ - Indicates a state combined from multiple states. - """ - - def __new__(cls, *args): - return tuple.__new__(cls, args) - - def __init__(self, *args): - # tuple.__init__ doesn't do anything - pass - - -class _PseudoMatch: - """ - A pseudo match object constructed from a string. 
- """ - - def __init__(self, start, text): - self._text = text - self._start = start - - def start(self, arg=None): - return self._start - - def end(self, arg=None): - return self._start + len(self._text) - - def group(self, arg=None): - if arg: - raise IndexError('No such group') - return self._text - - def groups(self): - return (self._text,) - - def groupdict(self): - return {} - - -def bygroups(*args): - """ - Callback that yields multiple actions for each group in the match. - """ - def callback(lexer, match, ctx=None): - for i, action in enumerate(args): - if action is None: - continue - elif type(action) is _TokenType: - data = match.group(i + 1) - if data: - yield match.start(i + 1), action, data - else: - data = match.group(i + 1) - if data is not None: - if ctx: - ctx.pos = match.start(i + 1) - for item in action(lexer, - _PseudoMatch(match.start(i + 1), data), ctx): - if item: - yield item - if ctx: - ctx.pos = match.end() - return callback - - -class _This: - """ - Special singleton used for indicating the caller class. - Used by ``using``. - """ - -this = _This() - - -def using(_other, **kwargs): - """ - Callback that processes the match with a different lexer. - - The keyword arguments are forwarded to the lexer, except `state` which - is handled separately. - - `state` specifies the state that the new lexer will start in, and can - be an enumerable such as ('root', 'inline', 'string') or a simple - string which is assumed to be on top of the root state. - - Note: For that to work, `_other` must not be an `ExtendedRegexLexer`. - """ - gt_kwargs = {} - if 'state' in kwargs: - s = kwargs.pop('state') - if isinstance(s, (list, tuple)): - gt_kwargs['stack'] = s - else: - gt_kwargs['stack'] = ('root', s) - - if _other is this: - def callback(lexer, match, ctx=None): - # if keyword arguments are given the callback - # function has to create a new lexer instance - if kwargs: - # XXX: cache that somehow - kwargs.update(lexer.options) - lx = lexer.__class__(**kwargs) - else: - lx = lexer - s = match.start() - for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs): - yield i + s, t, v - if ctx: - ctx.pos = match.end() - else: - def callback(lexer, match, ctx=None): - # XXX: cache that somehow - kwargs.update(lexer.options) - lx = _other(**kwargs) - - s = match.start() - for i, t, v in lx.get_tokens_unprocessed(match.group(), **gt_kwargs): - yield i + s, t, v - if ctx: - ctx.pos = match.end() - return callback - - -class default: - """ - Indicates a state or state action (e.g. #pop) to apply. - For example default('#pop') is equivalent to ('', Token, '#pop') - Note that state tuples may be used as well. - - .. versionadded:: 2.0 - """ - def __init__(self, state): - self.state = state - - -class words(Future): - """ - Indicates a list of literal words that is transformed into an optimized - regex that matches any of the words. - - .. versionadded:: 2.0 - """ - def __init__(self, words, prefix='', suffix=''): - self.words = words - self.prefix = prefix - self.suffix = suffix - - def get(self): - return regex_opt(self.words, prefix=self.prefix, suffix=self.suffix) - - -class RegexLexerMeta(LexerMeta): - """ - Metaclass for RegexLexer, creates the self._tokens attribute from - self.tokens on the first instantiation. 
- """ - - def _process_regex(cls, regex, rflags, state): - """Preprocess the regular expression component of a token definition.""" - if isinstance(regex, Future): - regex = regex.get() - return re.compile(regex, rflags).match - - def _process_token(cls, token): - """Preprocess the token component of a token definition.""" - assert type(token) is _TokenType or callable(token), \ - 'token type must be simple type or callable, not %r' % (token,) - return token - - def _process_new_state(cls, new_state, unprocessed, processed): - """Preprocess the state transition action of a token definition.""" - if isinstance(new_state, str): - # an existing state - if new_state == '#pop': - return -1 - elif new_state in unprocessed: - return (new_state,) - elif new_state == '#push': - return new_state - elif new_state[:5] == '#pop:': - return -int(new_state[5:]) - else: - assert False, 'unknown new state %r' % new_state - elif isinstance(new_state, combined): - # combine a new state from existing ones - tmp_state = '_tmp_%d' % cls._tmpname - cls._tmpname += 1 - itokens = [] - for istate in new_state: - assert istate != new_state, 'circular state ref %r' % istate - itokens.extend(cls._process_state(unprocessed, - processed, istate)) - processed[tmp_state] = itokens - return (tmp_state,) - elif isinstance(new_state, tuple): - # push more than one state - for istate in new_state: - assert (istate in unprocessed or - istate in ('#pop', '#push')), \ - 'unknown new state ' + istate - return new_state - else: - assert False, 'unknown new state def %r' % new_state - - def _process_state(cls, unprocessed, processed, state): - """Preprocess a single state definition.""" - assert type(state) is str, "wrong state name %r" % state - assert state[0] != '#', "invalid state name %r" % state - if state in processed: - return processed[state] - tokens = processed[state] = [] - rflags = cls.flags - for tdef in unprocessed[state]: - if isinstance(tdef, include): - # it's a state reference - assert tdef != state, "circular state reference %r" % state - tokens.extend(cls._process_state(unprocessed, processed, - str(tdef))) - continue - if isinstance(tdef, _inherit): - # should be processed already, but may not in the case of: - # 1. the state has no counterpart in any parent - # 2. the state includes more than one 'inherit' - continue - if isinstance(tdef, default): - new_state = cls._process_new_state(tdef.state, unprocessed, processed) - tokens.append((re.compile('').match, None, new_state)) - continue - - assert type(tdef) is tuple, "wrong rule def %r" % tdef - - try: - rex = cls._process_regex(tdef[0], rflags, state) - except Exception as err: - raise ValueError("uncompilable regex %r in state %r of %r: %s" % - (tdef[0], state, cls, err)) from err - - token = cls._process_token(tdef[1]) - - if len(tdef) == 2: - new_state = None - else: - new_state = cls._process_new_state(tdef[2], - unprocessed, processed) - - tokens.append((rex, token, new_state)) - return tokens - - def process_tokendef(cls, name, tokendefs=None): - """Preprocess a dictionary of token definitions.""" - processed = cls._all_tokens[name] = {} - tokendefs = tokendefs or cls.tokens[name] - for state in list(tokendefs): - cls._process_state(tokendefs, processed, state) - return processed - - def get_tokendefs(cls): - """ - Merge tokens from superclasses in MRO order, returning a single tokendef - dictionary. - - Any state that is not defined by a subclass will be inherited - automatically. 
States that *are* defined by subclasses will, by - default, override that state in the superclass. If a subclass wishes to - inherit definitions from a superclass, it can use the special value - "inherit", which will cause the superclass' state definition to be - included at that point in the state. - """ - tokens = {} - inheritable = {} - for c in cls.__mro__: - toks = c.__dict__.get('tokens', {}) - - for state, items in toks.items(): - curitems = tokens.get(state) - if curitems is None: - # N.b. because this is assigned by reference, sufficiently - # deep hierarchies are processed incrementally (e.g. for - # A(B), B(C), C(RegexLexer), B will be premodified so X(B) - # will not see any inherits in B). - tokens[state] = items - try: - inherit_ndx = items.index(inherit) - except ValueError: - continue - inheritable[state] = inherit_ndx - continue - - inherit_ndx = inheritable.pop(state, None) - if inherit_ndx is None: - continue - - # Replace the "inherit" value with the items - curitems[inherit_ndx:inherit_ndx+1] = items - try: - # N.b. this is the index in items (that is, the superclass - # copy), so offset required when storing below. - new_inh_ndx = items.index(inherit) - except ValueError: - pass - else: - inheritable[state] = inherit_ndx + new_inh_ndx - - return tokens - - def __call__(cls, *args, **kwds): - """Instantiate cls after preprocessing its token definitions.""" - if '_tokens' not in cls.__dict__: - cls._all_tokens = {} - cls._tmpname = 0 - if hasattr(cls, 'token_variants') and cls.token_variants: - # don't process yet - pass - else: - cls._tokens = cls.process_tokendef('', cls.get_tokendefs()) - - return type.__call__(cls, *args, **kwds) - - -class RegexLexer(Lexer, metaclass=RegexLexerMeta): - """ - Base for simple stateful regular expression-based lexers. - Simplifies the lexing process so that you need only - provide a list of states and regular expressions. - """ - - #: Flags for compiling the regular expressions. - #: Defaults to MULTILINE. - flags = re.MULTILINE - - #: At all time there is a stack of states. Initially, the stack contains - #: a single state 'root'. The top of the stack is called "the current state". - #: - #: Dict of ``{'state': [(regex, tokentype, new_state), ...], ...}`` - #: - #: ``new_state`` can be omitted to signify no state transition. - #: If ``new_state`` is a string, it is pushed on the stack. This ensure - #: the new current state is ``new_state``. - #: If ``new_state`` is a tuple of strings, all of those strings are pushed - #: on the stack and the current state will be the last element of the list. - #: ``new_state`` can also be ``combined('state1', 'state2', ...)`` - #: to signify a new, anonymous state combined from the rules of two - #: or more existing ones. - #: Furthermore, it can be '#pop' to signify going back one step in - #: the state stack, or '#push' to push the current state on the stack - #: again. Note that if you push while in a combined state, the combined - #: state itself is pushed, and not only the state in which the rule is - #: defined. - #: - #: The tuple can also be replaced with ``include('state')``, in which - #: case the rules from the state named by the string are included in the - #: current one. - tokens = {} - - def get_tokens_unprocessed(self, text, stack=('root',)): - """ - Split ``text`` into (tokentype, text) pairs. 
- - ``stack`` is the initial stack (default: ``['root']``) - """ - pos = 0 - tokendefs = self._tokens - statestack = list(stack) - statetokens = tokendefs[statestack[-1]] - while 1: - for rexmatch, action, new_state in statetokens: - m = rexmatch(text, pos) - if m: - if action is not None: - if type(action) is _TokenType: - yield pos, action, m.group() - else: - yield from action(self, m) - pos = m.end() - if new_state is not None: - # state transition - if isinstance(new_state, tuple): - for state in new_state: - if state == '#pop': - if len(statestack) > 1: - statestack.pop() - elif state == '#push': - statestack.append(statestack[-1]) - else: - statestack.append(state) - elif isinstance(new_state, int): - # pop, but keep at least one state on the stack - # (random code leading to unexpected pops should - # not allow exceptions) - if abs(new_state) >= len(statestack): - del statestack[1:] - else: - del statestack[new_state:] - elif new_state == '#push': - statestack.append(statestack[-1]) - else: - assert False, "wrong state def: %r" % new_state - statetokens = tokendefs[statestack[-1]] - break - else: - # We are here only if all state tokens have been considered - # and there was not a match on any of them. - try: - if text[pos] == '\n': - # at EOL, reset state to "root" - statestack = ['root'] - statetokens = tokendefs['root'] - yield pos, Whitespace, '\n' - pos += 1 - continue - yield pos, Error, text[pos] - pos += 1 - except IndexError: - break - - -class LexerContext: - """ - A helper object that holds lexer position data. - """ - - def __init__(self, text, pos, stack=None, end=None): - self.text = text - self.pos = pos - self.end = end or len(text) # end=0 not supported ;-) - self.stack = stack or ['root'] - - def __repr__(self): - return 'LexerContext(%r, %r, %r)' % ( - self.text, self.pos, self.stack) - - -class ExtendedRegexLexer(RegexLexer): - """ - A RegexLexer that uses a context object to store its state. - """ - - def get_tokens_unprocessed(self, text=None, context=None): - """ - Split ``text`` into (tokentype, text) pairs. - If ``context`` is given, use this lexer context instead. - """ - tokendefs = self._tokens - if not context: - ctx = LexerContext(text, 0) - statetokens = tokendefs['root'] - else: - ctx = context - statetokens = tokendefs[ctx.stack[-1]] - text = ctx.text - while 1: - for rexmatch, action, new_state in statetokens: - m = rexmatch(text, ctx.pos, ctx.end) - if m: - if action is not None: - if type(action) is _TokenType: - yield ctx.pos, action, m.group() - ctx.pos = m.end() - else: - yield from action(self, m, ctx) - if not new_state: - # altered the state stack? - statetokens = tokendefs[ctx.stack[-1]] - # CAUTION: callback must set ctx.pos! 
- if new_state is not None: - # state transition - if isinstance(new_state, tuple): - for state in new_state: - if state == '#pop': - if len(ctx.stack) > 1: - ctx.stack.pop() - elif state == '#push': - ctx.stack.append(ctx.stack[-1]) - else: - ctx.stack.append(state) - elif isinstance(new_state, int): - # see RegexLexer for why this check is made - if abs(new_state) >= len(ctx.stack): - del ctx.stack[1:] - else: - del ctx.stack[new_state:] - elif new_state == '#push': - ctx.stack.append(ctx.stack[-1]) - else: - assert False, "wrong state def: %r" % new_state - statetokens = tokendefs[ctx.stack[-1]] - break - else: - try: - if ctx.pos >= ctx.end: - break - if text[ctx.pos] == '\n': - # at EOL, reset state to "root" - ctx.stack = ['root'] - statetokens = tokendefs['root'] - yield ctx.pos, Text, '\n' - ctx.pos += 1 - continue - yield ctx.pos, Error, text[ctx.pos] - ctx.pos += 1 - except IndexError: - break - - -def do_insertions(insertions, tokens): - """ - Helper for lexers which must combine the results of several - sublexers. - - ``insertions`` is a list of ``(index, itokens)`` pairs. - Each ``itokens`` iterable should be inserted at position - ``index`` into the token stream given by the ``tokens`` - argument. - - The result is a combined token stream. - - TODO: clean up the code here. - """ - insertions = iter(insertions) - try: - index, itokens = next(insertions) - except StopIteration: - # no insertions - yield from tokens - return - - realpos = None - insleft = True - - # iterate over the token stream where we want to insert - # the tokens from the insertion list. - for i, t, v in tokens: - # first iteration. store the position of first item - if realpos is None: - realpos = i - oldi = 0 - while insleft and i + len(v) >= index: - tmpval = v[oldi:index - i] - if tmpval: - yield realpos, t, tmpval - realpos += len(tmpval) - for it_index, it_token, it_value in itokens: - yield realpos, it_token, it_value - realpos += len(it_value) - oldi = index - i - try: - index, itokens = next(insertions) - except StopIteration: - insleft = False - break # not strictly necessary - if oldi < len(v): - yield realpos, t, v[oldi:] - realpos += len(v) - oldi - - # leftover tokens - while insleft: - # no normal tokens, set realpos to zero - realpos = realpos or 0 - for p, t, v in itokens: - yield realpos, t, v - realpos += len(v) - try: - index, itokens = next(insertions) - except StopIteration: - insleft = False - break # not strictly necessary - - -class ProfilingRegexLexerMeta(RegexLexerMeta): - """Metaclass for ProfilingRegexLexer, collects regex timing info.""" - - def _process_regex(cls, regex, rflags, state): - if isinstance(regex, words): - rex = regex_opt(regex.words, prefix=regex.prefix, - suffix=regex.suffix) - else: - rex = regex - compiled = re.compile(rex, rflags) - - def match_func(text, pos, endpos=sys.maxsize): - info = cls._prof_data[-1].setdefault((state, rex), [0, 0.0]) - t0 = time.time() - res = compiled.match(text, pos, endpos) - t1 = time.time() - info[0] += 1 - info[1] += t1 - t0 - return res - return match_func - - -class ProfilingRegexLexer(RegexLexer, metaclass=ProfilingRegexLexerMeta): - """Drop-in replacement for RegexLexer that does profiling of its regexes.""" - - _prof_data = [] - _prof_sort_index = 4 # defaults to time per call - - def get_tokens_unprocessed(self, text, stack=('root',)): - # this needs to be a stack, since using(this) will produce nested calls - self.__class__._prof_data.append({}) - yield from RegexLexer.get_tokens_unprocessed(self, text, stack) - rawdata = 
self.__class__._prof_data.pop() - data = sorted(((s, repr(r).strip('u\'').replace('\\\\', '\\')[:65], - n, 1000 * t, 1000 * t / n) - for ((s, r), (n, t)) in rawdata.items()), - key=lambda x: x[self._prof_sort_index], - reverse=True) - sum_total = sum(x[3] for x in data) - - print() - print('Profiling result for %s lexing %d chars in %.3f ms' % - (self.__class__.__name__, len(text), sum_total)) - print('=' * 110) - print('%-20s %-64s ncalls tottime percall' % ('state', 'regex')) - print('-' * 110) - for d in data: - print('%-20s %-65s %5d %8.4f %8.4f' % d) - print('=' * 110) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/__init__.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/__init__.py deleted file mode 100644 index b3ac0146cb3f4cb1894f55fc09775875bc4e1177..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -"""distutils - -The main package for the Python Module Distribution Utilities. Normally -used from a setup script as - - from distutils.core import setup - - setup (...) -""" - -import sys -import importlib - -__version__ = sys.version[: sys.version.index(' ')] - - -try: - # Allow Debian and pkgsrc (only) to customize system - # behavior. Ref pypa/distutils#2 and pypa/distutils#16. - # This hook is deprecated and no other environments - # should use it. - importlib.import_module('_distutils_system_mod') -except ImportError: - pass diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/build_meta.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/build_meta.py deleted file mode 100644 index e8f1c72d598d6d5a03b75f68a6d567b1d6b1e9a2..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/build_meta.py +++ /dev/null @@ -1,511 +0,0 @@ -"""A PEP 517 interface to setuptools - -Previously, when a user or a command line tool (let's call it a "frontend") -needed to make a request of setuptools to take a certain action, for -example, generating a list of installation requirements, the frontend would -would call "setup.py egg_info" or "setup.py bdist_wheel" on the command line. - -PEP 517 defines a different method of interfacing with setuptools. Rather -than calling "setup.py" directly, the frontend should: - - 1. Set the current directory to the directory with a setup.py file - 2. Import this module into a safe python interpreter (one in which - setuptools can potentially set global variables or crash hard). - 3. Call one of the functions defined in PEP 517. - -What each function does is defined in PEP 517. However, here is a "casual" -definition of the functions (this definition should not be relied on for -bug reports or API stability): - - - `build_wheel`: build a wheel in the folder and return the basename - - `get_requires_for_build_wheel`: get the `setup_requires` to build - - `prepare_metadata_for_build_wheel`: get the `install_requires` - - `build_sdist`: build an sdist in the folder and return the basename - - `get_requires_for_build_sdist`: get the `setup_requires` to build - -Again, this is not a formal definition! Just a "taste" of the module. -""" - -import io -import os -import shlex -import sys -import tokenize -import shutil -import contextlib -import tempfile -import warnings -from pathlib import Path -from typing import Dict, Iterator, List, Optional, Union - -import setuptools -import distutils -from . 
import errors -from ._path import same_path -from ._reqs import parse_strings -from ._deprecation_warning import SetuptoolsDeprecationWarning -from distutils.util import strtobool - - -__all__ = ['get_requires_for_build_sdist', - 'get_requires_for_build_wheel', - 'prepare_metadata_for_build_wheel', - 'build_wheel', - 'build_sdist', - 'get_requires_for_build_editable', - 'prepare_metadata_for_build_editable', - 'build_editable', - '__legacy__', - 'SetupRequirementsError'] - -SETUPTOOLS_ENABLE_FEATURES = os.getenv("SETUPTOOLS_ENABLE_FEATURES", "").lower() -LEGACY_EDITABLE = "legacy-editable" in SETUPTOOLS_ENABLE_FEATURES.replace("_", "-") - - -class SetupRequirementsError(BaseException): - def __init__(self, specifiers): - self.specifiers = specifiers - - -class Distribution(setuptools.dist.Distribution): - def fetch_build_eggs(self, specifiers): - specifier_list = list(parse_strings(specifiers)) - - raise SetupRequirementsError(specifier_list) - - @classmethod - @contextlib.contextmanager - def patch(cls): - """ - Replace - distutils.dist.Distribution with this class - for the duration of this context. - """ - orig = distutils.core.Distribution - distutils.core.Distribution = cls - try: - yield - finally: - distutils.core.Distribution = orig - - -@contextlib.contextmanager -def no_install_setup_requires(): - """Temporarily disable installing setup_requires - - Under PEP 517, the backend reports build dependencies to the frontend, - and the frontend is responsible for ensuring they're installed. - So setuptools (acting as a backend) should not try to install them. - """ - orig = setuptools._install_setup_requires - setuptools._install_setup_requires = lambda attrs: None - try: - yield - finally: - setuptools._install_setup_requires = orig - - -def _get_immediate_subdirectories(a_dir): - return [name for name in os.listdir(a_dir) - if os.path.isdir(os.path.join(a_dir, name))] - - -def _file_with_extension(directory, extension): - matching = ( - f for f in os.listdir(directory) - if f.endswith(extension) - ) - try: - file, = matching - except ValueError: - raise ValueError( - 'No distribution was found. Ensure that `setup.py` ' - 'is not empty and that it calls `setup()`.') - return file - - -def _open_setup_script(setup_script): - if not os.path.exists(setup_script): - # Supply a default setup.py - return io.StringIO(u"from setuptools import setup; setup()") - - return getattr(tokenize, 'open', open)(setup_script) - - -@contextlib.contextmanager -def suppress_known_deprecation(): - with warnings.catch_warnings(): - warnings.filterwarnings('ignore', 'setup.py install is deprecated') - yield - - -_ConfigSettings = Optional[Dict[str, Union[str, List[str], None]]] -""" -Currently the user can run:: - - pip install -e . --config-settings key=value - python -m build -C--key=value -C key=value - -- pip will pass both key and value as strings and overwriting repeated keys - (pypa/pip#11059). -- build will accumulate values associated with repeated keys in a list. - It will also accept keys with no associated value. - This means that an option passed by build can be ``str | list[str] | None``. -- PEP 517 specifies that ``config_settings`` is an optional dict. -""" - - -class _ConfigSettingsTranslator: - """Translate ``config_settings`` into distutils-style command arguments. - Only a limited number of options is currently supported. 
- """ - # See pypa/setuptools#1928 pypa/setuptools#2491 - - def _get_config(self, key: str, config_settings: _ConfigSettings) -> List[str]: - """ - Get the value of a specific key in ``config_settings`` as a list of strings. - - >>> fn = _ConfigSettingsTranslator()._get_config - >>> fn("--global-option", None) - [] - >>> fn("--global-option", {}) - [] - >>> fn("--global-option", {'--global-option': 'foo'}) - ['foo'] - >>> fn("--global-option", {'--global-option': ['foo']}) - ['foo'] - >>> fn("--global-option", {'--global-option': 'foo'}) - ['foo'] - >>> fn("--global-option", {'--global-option': 'foo bar'}) - ['foo', 'bar'] - """ - cfg = config_settings or {} - opts = cfg.get(key) or [] - return shlex.split(opts) if isinstance(opts, str) else opts - - def _valid_global_options(self): - """Global options accepted by setuptools (e.g. quiet or verbose).""" - options = (opt[:2] for opt in setuptools.dist.Distribution.global_options) - return {flag for long_and_short in options for flag in long_and_short if flag} - - def _global_args(self, config_settings: _ConfigSettings) -> Iterator[str]: - """ - Let the user specify ``verbose`` or ``quiet`` + escape hatch via - ``--global-option``. - Note: ``-v``, ``-vv``, ``-vvv`` have similar effects in setuptools, - so we just have to cover the basic scenario ``-v``. - - >>> fn = _ConfigSettingsTranslator()._global_args - >>> list(fn(None)) - [] - >>> list(fn({"verbose": "False"})) - ['-q'] - >>> list(fn({"verbose": "1"})) - ['-v'] - >>> list(fn({"--verbose": None})) - ['-v'] - >>> list(fn({"verbose": "true", "--global-option": "-q --no-user-cfg"})) - ['-v', '-q', '--no-user-cfg'] - >>> list(fn({"--quiet": None})) - ['-q'] - """ - cfg = config_settings or {} - falsey = {"false", "no", "0", "off"} - if "verbose" in cfg or "--verbose" in cfg: - level = str(cfg.get("verbose") or cfg.get("--verbose") or "1") - yield ("-q" if level.lower() in falsey else "-v") - if "quiet" in cfg or "--quiet" in cfg: - level = str(cfg.get("quiet") or cfg.get("--quiet") or "1") - yield ("-v" if level.lower() in falsey else "-q") - - valid = self._valid_global_options() - args = self._get_config("--global-option", config_settings) - yield from (arg for arg in args if arg.strip("-") in valid) - - def __dist_info_args(self, config_settings: _ConfigSettings) -> Iterator[str]: - """ - The ``dist_info`` command accepts ``tag-date`` and ``tag-build``. - - .. warning:: - We cannot use this yet as it requires the ``sdist`` and ``bdist_wheel`` - commands run in ``build_sdist`` and ``build_wheel`` to re-use the egg-info - directory created in ``prepare_metadata_for_build_wheel``. - - >>> fn = _ConfigSettingsTranslator()._ConfigSettingsTranslator__dist_info_args - >>> list(fn(None)) - [] - >>> list(fn({"tag-date": "False"})) - ['--no-date'] - >>> list(fn({"tag-date": None})) - ['--no-date'] - >>> list(fn({"tag-date": "true", "tag-build": ".a"})) - ['--tag-date', '--tag-build', '.a'] - """ - cfg = config_settings or {} - if "tag-date" in cfg: - val = strtobool(str(cfg["tag-date"] or "false")) - yield ("--tag-date" if val else "--no-date") - if "tag-build" in cfg: - yield from ["--tag-build", str(cfg["tag-build"])] - - def _editable_args(self, config_settings: _ConfigSettings) -> Iterator[str]: - """ - The ``editable_wheel`` command accepts ``editable-mode=strict``. 
- - >>> fn = _ConfigSettingsTranslator()._editable_args - >>> list(fn(None)) - [] - >>> list(fn({"editable-mode": "strict"})) - ['--mode', 'strict'] - """ - cfg = config_settings or {} - mode = cfg.get("editable-mode") or cfg.get("editable_mode") - if not mode: - return - yield from ["--mode", str(mode)] - - def _arbitrary_args(self, config_settings: _ConfigSettings) -> Iterator[str]: - """ - Users may expect to pass arbitrary lists of arguments to a command - via "--global-option" (example provided in PEP 517 of a "escape hatch"). - - >>> fn = _ConfigSettingsTranslator()._arbitrary_args - >>> list(fn(None)) - [] - >>> list(fn({})) - [] - >>> list(fn({'--build-option': 'foo'})) - ['foo'] - >>> list(fn({'--build-option': ['foo']})) - ['foo'] - >>> list(fn({'--build-option': 'foo'})) - ['foo'] - >>> list(fn({'--build-option': 'foo bar'})) - ['foo', 'bar'] - >>> warnings.simplefilter('error', SetuptoolsDeprecationWarning) - >>> list(fn({'--global-option': 'foo'})) # doctest: +IGNORE_EXCEPTION_DETAIL - Traceback (most recent call last): - SetuptoolsDeprecationWarning: ...arguments given via `--global-option`... - """ - args = self._get_config("--global-option", config_settings) - global_opts = self._valid_global_options() - bad_args = [] - - for arg in args: - if arg.strip("-") not in global_opts: - bad_args.append(arg) - yield arg - - yield from self._get_config("--build-option", config_settings) - - if bad_args: - msg = f""" - The arguments {bad_args!r} were given via `--global-option`. - Please use `--build-option` instead, - `--global-option` is reserved to flags like `--verbose` or `--quiet`. - """ - warnings.warn(msg, SetuptoolsDeprecationWarning) - - -class _BuildMetaBackend(_ConfigSettingsTranslator): - def _get_build_requires(self, config_settings, requirements): - sys.argv = [ - *sys.argv[:1], - *self._global_args(config_settings), - "egg_info", - *self._arbitrary_args(config_settings), - ] - try: - with Distribution.patch(): - self.run_setup() - except SetupRequirementsError as e: - requirements += e.specifiers - - return requirements - - def run_setup(self, setup_script='setup.py'): - # Note that we can reuse our build directory between calls - # Correctness comes first, then optimization later - __file__ = setup_script - __name__ = '__main__' - - with _open_setup_script(__file__) as f: - code = f.read().replace(r'\r\n', r'\n') - - exec(code, locals()) - - def get_requires_for_build_wheel(self, config_settings=None): - return self._get_build_requires(config_settings, requirements=['wheel']) - - def get_requires_for_build_sdist(self, config_settings=None): - return self._get_build_requires(config_settings, requirements=[]) - - def _bubble_up_info_directory(self, metadata_directory: str, suffix: str) -> str: - """ - PEP 517 requires that the .dist-info directory be placed in the - metadata_directory. To comply, we MUST copy the directory to the root. - - Returns the basename of the info directory, e.g. `proj-0.0.0.dist-info`. 
- """ - info_dir = self._find_info_directory(metadata_directory, suffix) - if not same_path(info_dir.parent, metadata_directory): - shutil.move(str(info_dir), metadata_directory) - # PEP 517 allow other files and dirs to exist in metadata_directory - return info_dir.name - - def _find_info_directory(self, metadata_directory: str, suffix: str) -> Path: - for parent, dirs, _ in os.walk(metadata_directory): - candidates = [f for f in dirs if f.endswith(suffix)] - - if len(candidates) != 0 or len(dirs) != 1: - assert len(candidates) == 1, f"Multiple {suffix} directories found" - return Path(parent, candidates[0]) - - msg = f"No {suffix} directory found in {metadata_directory}" - raise errors.InternalError(msg) - - def prepare_metadata_for_build_wheel(self, metadata_directory, - config_settings=None): - sys.argv = [ - *sys.argv[:1], - *self._global_args(config_settings), - "dist_info", - "--output-dir", metadata_directory, - "--keep-egg-info", - ] - with no_install_setup_requires(): - self.run_setup() - - self._bubble_up_info_directory(metadata_directory, ".egg-info") - return self._bubble_up_info_directory(metadata_directory, ".dist-info") - - def _build_with_temp_dir(self, setup_command, result_extension, - result_directory, config_settings): - result_directory = os.path.abspath(result_directory) - - # Build in a temporary directory, then copy to the target. - os.makedirs(result_directory, exist_ok=True) - with tempfile.TemporaryDirectory(dir=result_directory) as tmp_dist_dir: - sys.argv = [ - *sys.argv[:1], - *self._global_args(config_settings), - *setup_command, - "--dist-dir", tmp_dist_dir, - *self._arbitrary_args(config_settings), - ] - with no_install_setup_requires(): - self.run_setup() - - result_basename = _file_with_extension( - tmp_dist_dir, result_extension) - result_path = os.path.join(result_directory, result_basename) - if os.path.exists(result_path): - # os.rename will fail overwriting on non-Unix. - os.remove(result_path) - os.rename(os.path.join(tmp_dist_dir, result_basename), result_path) - - return result_basename - - def build_wheel(self, wheel_directory, config_settings=None, - metadata_directory=None): - with suppress_known_deprecation(): - return self._build_with_temp_dir(['bdist_wheel'], '.whl', - wheel_directory, config_settings) - - def build_sdist(self, sdist_directory, config_settings=None): - return self._build_with_temp_dir(['sdist', '--formats', 'gztar'], - '.tar.gz', sdist_directory, - config_settings) - - def _get_dist_info_dir(self, metadata_directory: Optional[str]) -> Optional[str]: - if not metadata_directory: - return None - dist_info_candidates = list(Path(metadata_directory).glob("*.dist-info")) - assert len(dist_info_candidates) <= 1 - return str(dist_info_candidates[0]) if dist_info_candidates else None - - if not LEGACY_EDITABLE: - - # PEP660 hooks: - # build_editable - # get_requires_for_build_editable - # prepare_metadata_for_build_editable - def build_editable( - self, wheel_directory, config_settings=None, metadata_directory=None - ): - # XXX can or should we hide our editable_wheel command normally? 
- info_dir = self._get_dist_info_dir(metadata_directory) - opts = ["--dist-info-dir", info_dir] if info_dir else [] - cmd = ["editable_wheel", *opts, *self._editable_args(config_settings)] - with suppress_known_deprecation(): - return self._build_with_temp_dir( - cmd, ".whl", wheel_directory, config_settings - ) - - def get_requires_for_build_editable(self, config_settings=None): - return self.get_requires_for_build_wheel(config_settings) - - def prepare_metadata_for_build_editable(self, metadata_directory, - config_settings=None): - return self.prepare_metadata_for_build_wheel( - metadata_directory, config_settings - ) - - -class _BuildMetaLegacyBackend(_BuildMetaBackend): - """Compatibility backend for setuptools - - This is a version of setuptools.build_meta that endeavors - to maintain backwards - compatibility with pre-PEP 517 modes of invocation. It - exists as a temporary - bridge between the old packaging mechanism and the new - packaging mechanism, - and will eventually be removed. - """ - def run_setup(self, setup_script='setup.py'): - # In order to maintain compatibility with scripts assuming that - # the setup.py script is in a directory on the PYTHONPATH, inject - # '' into sys.path. (pypa/setuptools#1642) - sys_path = list(sys.path) # Save the original path - - script_dir = os.path.dirname(os.path.abspath(setup_script)) - if script_dir not in sys.path: - sys.path.insert(0, script_dir) - - # Some setup.py scripts (e.g. in pygame and numpy) use sys.argv[0] to - # get the directory of the source code. They expect it to refer to the - # setup.py script. - sys_argv_0 = sys.argv[0] - sys.argv[0] = setup_script - - try: - super(_BuildMetaLegacyBackend, - self).run_setup(setup_script=setup_script) - finally: - # While PEP 517 frontends should be calling each hook in a fresh - # subprocess according to the standard (and thus it should not be - # strictly necessary to restore the old sys.path), we'll restore - # the original path so that the path manipulation does not persist - # within the hook after run_setup is called. 
- sys.path[:] = sys_path - sys.argv[0] = sys_argv_0 - - -# The primary backend -_BACKEND = _BuildMetaBackend() - -get_requires_for_build_wheel = _BACKEND.get_requires_for_build_wheel -get_requires_for_build_sdist = _BACKEND.get_requires_for_build_sdist -prepare_metadata_for_build_wheel = _BACKEND.prepare_metadata_for_build_wheel -build_wheel = _BACKEND.build_wheel -build_sdist = _BACKEND.build_sdist - -if not LEGACY_EDITABLE: - get_requires_for_build_editable = _BACKEND.get_requires_for_build_editable - prepare_metadata_for_build_editable = _BACKEND.prepare_metadata_for_build_editable - build_editable = _BACKEND.build_editable - - -# The legacy backend -__legacy__ = _BuildMetaLegacyBackend() diff --git a/spaces/Bl1tzie/Jam/README.md b/spaces/Bl1tzie/Jam/README.md deleted file mode 100644 index 6a2d56d43206a5674f8b936686f161211fec7c05..0000000000000000000000000000000000000000 --- a/spaces/Bl1tzie/Jam/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Jam -emoji: 😻 -colorFrom: green -colorTo: gray -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Boadiwaa/Recipes/openai/cli.py b/spaces/Boadiwaa/Recipes/openai/cli.py deleted file mode 100644 index fd9c8469ad68affdb53220d816d011eea806120f..0000000000000000000000000000000000000000 --- a/spaces/Boadiwaa/Recipes/openai/cli.py +++ /dev/null @@ -1,1018 +0,0 @@ -import datetime -import os -import signal -import sys -import warnings -from functools import partial -from typing import Optional - -import requests - -import openai -import openai.wandb_logger -from openai.upload_progress import BufferReader -from openai.validators import ( - apply_necessary_remediation, - apply_validators, - get_search_validators, - get_validators, - read_any_format, - write_out_file, - write_out_search_file, -) - - -class bcolors: - HEADER = "\033[95m" - OKBLUE = "\033[94m" - OKGREEN = "\033[92m" - WARNING = "\033[93m" - FAIL = "\033[91m" - ENDC = "\033[0m" - BOLD = "\033[1m" - UNDERLINE = "\033[4m" - - -def organization_info(obj): - organization = getattr(obj, "organization", None) - if organization is not None: - return "[organization={}] ".format(organization) - else: - return "" - - -def display(obj): - sys.stderr.write(organization_info(obj)) - sys.stderr.flush() - print(obj) - - -def display_error(e): - extra = ( - " (HTTP status code: {})".format(e.http_status) - if e.http_status is not None - else "" - ) - sys.stderr.write( - "{}{}Error:{} {}{}\n".format( - organization_info(e), bcolors.FAIL, bcolors.ENDC, e, extra - ) - ) - - -class Engine: - @classmethod - def get(cls, args): - engine = openai.Engine.retrieve(id=args.id) - display(engine) - - @classmethod - def update(cls, args): - engine = openai.Engine.modify(args.id, replicas=args.replicas) - display(engine) - - @classmethod - def generate(cls, args): - warnings.warn( - "Engine.generate is deprecated, use Completion.create", DeprecationWarning - ) - if args.completions and args.completions > 1 and args.stream: - raise ValueError("Can't stream multiple completions with openai CLI") - - kwargs = {} - if args.model is not None: - kwargs["model"] = args.model - resp = openai.Engine(id=args.id).generate( - completions=args.completions, - context=args.context, - length=args.length, - stream=args.stream, - temperature=args.temperature, - top_p=args.top_p, - logprobs=args.logprobs, - stop=args.stop, - **kwargs, - ) - if not args.stream: - resp = [resp] - - for part in resp: - completions = len(part["data"]) - 
for c_idx, c in enumerate(part["data"]): - if completions > 1: - sys.stdout.write("===== Completion {} =====\n".format(c_idx)) - sys.stdout.write("".join(c["text"])) - if completions > 1: - sys.stdout.write("\n") - sys.stdout.flush() - - @classmethod - def search(cls, args): - params = { - "query": args.query, - "max_rerank": args.max_rerank, - "return_metadata": args.return_metadata, - } - if args.documents: - params["documents"] = args.documents - if args.file: - params["file"] = args.file - - if args.version: - params["version"] = args.version - - resp = openai.Engine(id=args.id).search(**params) - scores = [ - (search_result["score"], search_result["document"]) - for search_result in resp["data"] - ] - scores.sort(reverse=True) - dataset = ( - args.documents if args.documents else [x["text"] for x in resp["data"]] - ) - for score, document_idx in scores: - print("=== score {:.3f} ===".format(score)) - print(dataset[document_idx]) - if ( - args.return_metadata - and args.file - and "metadata" in resp["data"][document_idx] - ): - print(f"METADATA: {resp['data'][document_idx]['metadata']}") - - @classmethod - def list(cls, args): - engines = openai.Engine.list() - display(engines) - - -class Completion: - @classmethod - def create(cls, args): - if args.n is not None and args.n > 1 and args.stream: - raise ValueError("Can't stream completions with n>1 with the current CLI") - - if args.engine and args.model: - warnings.warn( - "In most cases, you should not be specifying both engine and model." - ) - - resp = openai.Completion.create( - engine=args.engine, - model=args.model, - n=args.n, - max_tokens=args.max_tokens, - logprobs=args.logprobs, - prompt=args.prompt, - stream=args.stream, - temperature=args.temperature, - top_p=args.top_p, - stop=args.stop, - echo=True, - ) - if not args.stream: - resp = [resp] - - for part in resp: - choices = part["choices"] - for c_idx, c in enumerate(sorted(choices, key=lambda s: s["index"])): - if len(choices) > 1: - sys.stdout.write("===== Completion {} =====\n".format(c_idx)) - sys.stdout.write(c["text"]) - if len(choices) > 1: - sys.stdout.write("\n") - sys.stdout.flush() - - -class Model: - @classmethod - def get(cls, args): - resp = openai.Model.retrieve(id=args.id) - print(resp) - - @classmethod - def delete(cls, args): - model = openai.Model.delete(args.id) - print(model) - - @classmethod - def list(cls, args): - models = openai.Model.list() - print(models) - - -class File: - @classmethod - def create(cls, args): - with open(args.file, "rb") as file_reader: - buffer_reader = BufferReader(file_reader.read(), desc="Upload progress") - resp = openai.File.create( - file=buffer_reader, - purpose=args.purpose, - model=args.model, - user_provided_filename=args.file, - ) - print(resp) - - @classmethod - def get(cls, args): - resp = openai.File.retrieve(id=args.id) - print(resp) - - @classmethod - def delete(cls, args): - file = openai.File.delete(args.id) - print(file) - - @classmethod - def list(cls, args): - file = openai.File.list() - print(file) - - -class Search: - @classmethod - def prepare_data(cls, args, purpose): - - sys.stdout.write("Analyzing...\n") - fname = args.file - auto_accept = args.quiet - - optional_fields = ["metadata"] - - if purpose == "classifications": - required_fields = ["text", "label"] - else: - required_fields = ["text"] - - df, remediation = read_any_format( - fname, fields=required_fields + optional_fields - ) - - if "metadata" not in df: - df["metadata"] = None - - apply_necessary_remediation(None, remediation) - 
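-        # apply_validators runs each validator over the dataframe, prompting
-        # for (or, with --quiet, auto-accepting) the suggested remediations,
-        # then writes the prepared file out via write_out_search_file.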
validators = get_search_validators(required_fields, optional_fields) - - write_out_file_func = partial( - write_out_search_file, - purpose=purpose, - fields=required_fields + optional_fields, - ) - - apply_validators( - df, fname, remediation, validators, auto_accept, write_out_file_func - ) - - @classmethod - def create(cls, args): - resp = openai.Search.create( - query=args.query, - documents=args.documents, - model=args.model, - ) - print(resp) - - -class FineTune: - @classmethod - def list(cls, args): - resp = openai.FineTune.list() - print(resp) - - @classmethod - def _is_url(cls, file: str): - return file.lower().startswith("http") - - @classmethod - def _download_file_from_public_url(cls, url: str) -> Optional[bytes]: - resp = requests.get(url) - if resp.status_code == 200: - return resp.content - else: - return None - - @classmethod - def _maybe_upload_file( - cls, - file: Optional[str] = None, - content: Optional[bytes] = None, - user_provided_file: Optional[str] = None, - check_if_file_exists: bool = True, - ): - # Exactly one of `file` or `content` must be provided - if (file is None) == (content is None): - raise ValueError("Exactly one of `file` or `content` must be provided") - - if content is None: - assert file is not None - with open(file, "rb") as f: - content = f.read() - - if check_if_file_exists: - bytes = len(content) - matching_files = openai.File.find_matching_files( - name=user_provided_file or f.name, bytes=bytes, purpose="fine-tune" - ) - if len(matching_files) > 0: - file_ids = [f["id"] for f in matching_files] - sys.stdout.write( - "Found potentially duplicated files with name '{name}', purpose 'fine-tune' and size {size} bytes\n".format( - name=os.path.basename(matching_files[0]["filename"]), - size=matching_files[0]["bytes"] if "bytes" in matching_files[0] else matching_files[0]["size"], - ) - ) - sys.stdout.write("\n".join(file_ids)) - while True: - sys.stdout.write( - "\nEnter file ID to reuse an already uploaded file, or an empty string to upload this file anyway: " - ) - inp = sys.stdin.readline().strip() - if inp in file_ids: - sys.stdout.write( - "Reusing already uploaded file: {id}\n".format(id=inp) - ) - return inp - elif inp == "": - break - else: - sys.stdout.write( - "File id '{id}' is not among the IDs of the potentially duplicated files\n".format( - id=inp - ) - ) - - buffer_reader = BufferReader(content, desc="Upload progress") - resp = openai.File.create( - file=buffer_reader, - purpose="fine-tune", - user_provided_filename=user_provided_file or file, - ) - sys.stdout.write( - "Uploaded file from {file}: {id}\n".format( - file=user_provided_file or file, id=resp["id"] - ) - ) - return resp["id"] - - @classmethod - def _get_or_upload(cls, file, check_if_file_exists=True): - try: - # 1. If it's a valid file, use it - openai.File.retrieve(file) - return file - except openai.error.InvalidRequestError: - pass - if os.path.isfile(file): - # 2. If it's a file on the filesystem, upload it - return cls._maybe_upload_file( - file=file, check_if_file_exists=check_if_file_exists - ) - if cls._is_url(file): - # 3. 
If it's a URL, download it temporarily - content = cls._download_file_from_public_url(file) - if content is not None: - return cls._maybe_upload_file( - content=content, - check_if_file_exists=check_if_file_exists, - user_provided_file=file, - ) - return file - - @classmethod - def create(cls, args): - create_args = { - "training_file": cls._get_or_upload( - args.training_file, args.check_if_files_exist - ), - } - if args.validation_file: - create_args["validation_file"] = cls._get_or_upload( - args.validation_file, args.check_if_files_exist - ) - - for hparam in ( - "model", - "suffix", - "n_epochs", - "batch_size", - "learning_rate_multiplier", - "prompt_loss_weight", - "compute_classification_metrics", - "classification_n_classes", - "classification_positive_class", - "classification_betas", - ): - attr = getattr(args, hparam) - if attr is not None: - create_args[hparam] = attr - - resp = openai.FineTune.create(**create_args) - - if args.no_follow: - print(resp) - return - - sys.stdout.write( - "Created fine-tune: {job_id}\n" - "Streaming events until fine-tuning is complete...\n\n" - "(Ctrl-C will interrupt the stream, but not cancel the fine-tune)\n".format( - job_id=resp["id"] - ) - ) - cls._stream_events(resp["id"]) - - @classmethod - def get(cls, args): - resp = openai.FineTune.retrieve(id=args.id) - print(resp) - - @classmethod - def results(cls, args): - fine_tune = openai.FineTune.retrieve(id=args.id) - if "result_files" not in fine_tune or len(fine_tune["result_files"]) == 0: - raise openai.error.InvalidRequestError( - f"No results file available for fine-tune {args.id}", "id" - ) - result_file = openai.FineTune.retrieve(id=args.id)["result_files"][0] - resp = openai.File.download(id=result_file["id"]) - print(resp.decode("utf-8")) - - @classmethod - def events(cls, args): - if args.stream: - raise openai.error.OpenAIError( - message=( - "The --stream parameter is deprecated, use fine_tunes.follow " - "instead:\n\n" - " openai api fine_tunes.follow -i {id}\n".format(id=args.id) - ), - ) - - resp = openai.FineTune.list_events(id=args.id) # type: ignore - print(resp) - - @classmethod - def follow(cls, args): - cls._stream_events(args.id) - - @classmethod - def _stream_events(cls, job_id): - def signal_handler(sig, frame): - status = openai.FineTune.retrieve(job_id).status - sys.stdout.write( - "\nStream interrupted. Job is still {status}.\n" - "To resume the stream, run:\n\n" - " openai api fine_tunes.follow -i {job_id}\n\n" - "To cancel your job, run:\n\n" - " openai api fine_tunes.cancel -i {job_id}\n\n".format( - status=status, job_id=job_id - ) - ) - sys.exit(0) - - signal.signal(signal.SIGINT, signal_handler) - - events = openai.FineTune.stream_events(job_id) - # TODO(rachel): Add a nifty spinner here. - try: - for event in events: - sys.stdout.write( - "[%s] %s" - % ( - datetime.datetime.fromtimestamp(event["created_at"]), - event["message"], - ) - ) - sys.stdout.write("\n") - sys.stdout.flush() - except Exception: - sys.stdout.write( - "\nStream interrupted (client disconnected).\n" - "To resume the stream, run:\n\n" - " openai api fine_tunes.follow -i {job_id}\n\n".format(job_id=job_id) - ) - return - - resp = openai.FineTune.retrieve(id=job_id) - status = resp["status"] - if status == "succeeded": - sys.stdout.write("\nJob complete! 
Status: succeeded 🎉") - sys.stdout.write( - "\nTry out your fine-tuned model:\n\n" - "openai api completions.create -m {model} -p ".format( - model=resp["fine_tuned_model"] - ) - ) - elif status == "failed": - sys.stdout.write( - "\nJob failed. Please contact support@openai.com if you need assistance." - ) - sys.stdout.write("\n") - - @classmethod - def cancel(cls, args): - resp = openai.FineTune.cancel(id=args.id) - print(resp) - - @classmethod - def prepare_data(cls, args): - - sys.stdout.write("Analyzing...\n") - fname = args.file - auto_accept = args.quiet - df, remediation = read_any_format(fname) - apply_necessary_remediation(None, remediation) - - validators = get_validators() - - apply_validators( - df, - fname, - remediation, - validators, - auto_accept, - write_out_file_func=write_out_file, - ) - - -class WandbLogger: - @classmethod - def sync(cls, args): - resp = openai.wandb_logger.WandbLogger.sync( - id=args.id, - n_fine_tunes=args.n_fine_tunes, - project=args.project, - entity=args.entity, - force=args.force, - ) - print(resp) - - -def tools_register(parser): - subparsers = parser.add_subparsers( - title="Tools", help="Convenience client side tools" - ) - - def help(args): - parser.print_help() - - parser.set_defaults(func=help) - - sub = subparsers.add_parser("fine_tunes.prepare_data") - sub.add_argument( - "-f", - "--file", - required=True, - help="JSONL, JSON, CSV, TSV, TXT or XLSX file containing prompt-completion examples to be analyzed." - "This should be the local file path.", - ) - sub.add_argument( - "-q", - "--quiet", - required=False, - action="store_true", - help="Auto accepts all suggestions, without asking for user input. To be used within scripts.", - ) - sub.set_defaults(func=FineTune.prepare_data) - - sub = subparsers.add_parser("search.prepare_data") - sub.add_argument( - "-f", - "--file", - required=True, - help="JSONL, JSON, CSV, TSV, TXT or XLSX file containing text examples to be analyzed." - "This should be the local file path.", - ) - sub.add_argument( - "-q", - "--quiet", - required=False, - action="store_true", - help="Auto accepts all suggestions, without asking for user input. To be used within scripts.", - ) - sub.set_defaults(func=partial(Search.prepare_data, purpose="search")) - - sub = subparsers.add_parser("classifications.prepare_data") - sub.add_argument( - "-f", - "--file", - required=True, - help="JSONL, JSON, CSV, TSV, TXT or XLSX file containing text-label examples to be analyzed." - "This should be the local file path.", - ) - sub.add_argument( - "-q", - "--quiet", - required=False, - action="store_true", - help="Auto accepts all suggestions, without asking for user input. To be used within scripts.", - ) - sub.set_defaults(func=partial(Search.prepare_data, purpose="classifications")) - - sub = subparsers.add_parser("answers.prepare_data") - sub.add_argument( - "-f", - "--file", - required=True, - help="JSONL, JSON, CSV, TSV, TXT or XLSX file containing text examples to be analyzed." - "This should be the local file path.", - ) - sub.add_argument( - "-q", - "--quiet", - required=False, - action="store_true", - help="Auto accepts all suggestions, without asking for user input. 
To be used within scripts.",
-    )
-    sub.set_defaults(func=partial(Search.prepare_data, purpose="answer"))
-
-
-def api_register(parser):
-    # Engine management
-    subparsers = parser.add_subparsers(help="All API subcommands")
-
-    def help(args):
-        parser.print_help()
-
-    parser.set_defaults(func=help)
-
-    sub = subparsers.add_parser("engines.list")
-    sub.set_defaults(func=Engine.list)
-
-    sub = subparsers.add_parser("engines.get")
-    sub.add_argument("-i", "--id", required=True)
-    sub.set_defaults(func=Engine.get)
-
-    sub = subparsers.add_parser("engines.update")
-    sub.add_argument("-i", "--id", required=True)
-    sub.add_argument("-r", "--replicas", type=int)
-    sub.set_defaults(func=Engine.update)
-
-    sub = subparsers.add_parser("engines.generate")
-    sub.add_argument("-i", "--id", required=True)
-    sub.add_argument(
-        "--stream", help="Stream tokens as they're ready.", action="store_true"
-    )
-    sub.add_argument("-c", "--context", help="An optional context to generate from")
-    sub.add_argument("-l", "--length", help="How many tokens to generate", type=int)
-    sub.add_argument(
-        "-t",
-        "--temperature",
-        help="""What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
-
-Mutually exclusive with `top_p`.""",
-        type=float,
-    )
-    sub.add_argument(
-        "-p",
-        "--top_p",
-        help="""An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10%% probability mass are considered.
-
-    Mutually exclusive with `temperature`.""",
-        type=float,
-    )
-    sub.add_argument(
-        "-n",
-        "--completions",
-        help="How many parallel completions to run on this context",
-        type=int,
-    )
-    sub.add_argument(
-        "--logprobs",
-        help="Include the log probabilities on the `logprobs` most likely tokens. So for example, if `logprobs` is 10, the API will return a list of the 10 most likely tokens. If `logprobs` is supplied, the API will always return the logprob of the generated token, so there may be up to `logprobs+1` elements in the response.",
-        type=int,
-    )
-    sub.add_argument(
-        "--stop", help="A stop sequence at which to stop generating tokens."
-    )
-    sub.add_argument(
-        "-m",
-        "--model",
-        required=False,
-        help="A model (most commonly a model ID) to generate from. Defaults to the engine's default model.",
-    )
-    sub.set_defaults(func=Engine.generate)
-
-    sub = subparsers.add_parser("engines.search")
-    sub.add_argument("-i", "--id", required=True)
-    sub.add_argument(
-        "-d",
-        "--documents",
-        action="append",
-        help="List of documents to search over. Only one of `documents` or `file` may be supplied.",
-        required=False,
-    )
-    sub.add_argument(
-        "-f",
-        "--file",
-        help="A file id to search over. Only one of `documents` or `file` may be supplied.",
-        required=False,
-    )
-    sub.add_argument(
-        "--max_rerank",
-        help="The maximum number of documents to be re-ranked and returned by search. This flag only takes effect when `file` is set.",
-        type=int,
-        default=200,
-    )
-    sub.add_argument(
-        "--return_metadata",
-        help="A special boolean flag for showing metadata. If set to `true`, each document entry in the returned JSON will contain a 'metadata' field. Defaults to `false`.
This flag only takes effect when `file` is set.",
-        type=bool,
-        default=False,
-    )
-    sub.add_argument(
-        "--version",
-        help="The version of the search routing to use",
-    )
-
-    sub.add_argument("-q", "--query", required=True, help="Search query")
-    sub.set_defaults(func=Engine.search)
-
-    # Completions
-    sub = subparsers.add_parser("completions.create")
-    sub.add_argument(
-        "-e",
-        "--engine",
-        help="The engine to use. See https://beta.openai.com/docs/engines for more about what engines are available.",
-    )
-    sub.add_argument(
-        "-m",
-        "--model",
-        help="The model to use. At most one of `engine` or `model` should be specified.",
-    )
-    sub.add_argument(
-        "--stream", help="Stream tokens as they're ready.", action="store_true"
-    )
-    sub.add_argument("-p", "--prompt", help="An optional prompt to complete from")
-    sub.add_argument(
-        "-M", "--max-tokens", help="The maximum number of tokens to generate", type=int
-    )
-    sub.add_argument(
-        "-t",
-        "--temperature",
-        help="""What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
-
-Mutually exclusive with `top_p`.""",
-        type=float,
-    )
-    sub.add_argument(
-        "-P",
-        "--top_p",
-        help="""An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10%% probability mass are considered.
-
-    Mutually exclusive with `temperature`.""",
-        type=float,
-    )
-    sub.add_argument(
-        "-n",
-        "--n",
-        help="How many sub-completions to generate for each prompt.",
-        type=int,
-    )
-    sub.add_argument(
-        "--logprobs",
-        help="Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. So for example, if `logprobs` is 10, the API will return a list of the 10 most likely tokens. If `logprobs` is 0, only the chosen tokens will have logprobs returned.",
-        type=int,
-    )
-    sub.add_argument(
-        "--stop", help="A stop sequence at which to stop generating tokens."
-    )
-    sub.set_defaults(func=Completion.create)
-
-    # Models
-    sub = subparsers.add_parser("models.list")
-    sub.set_defaults(func=Model.list)
-
-    sub = subparsers.add_parser("models.get")
-    sub.add_argument("-i", "--id", required=True, help="The model ID")
-    sub.set_defaults(func=Model.get)
-
-    sub = subparsers.add_parser("models.delete")
-    sub.add_argument("-i", "--id", required=True, help="The model ID")
-    sub.set_defaults(func=Model.delete)
-
-    # Files
-    sub = subparsers.add_parser("files.create")
-
-    sub.add_argument(
-        "-f",
-        "--file",
-        required=True,
-        help="File to upload",
-    )
-    sub.add_argument(
-        "-p",
-        "--purpose",
-        help="Why are you uploading this file? (see https://beta.openai.com/docs/api-reference/ for purposes)",
-        required=True,
-    )
-    sub.add_argument(
-        "-m",
-        "--model",
-        help="Model for search indexing (e.g. 'ada').
Only meaningful if --purpose is 'search'.", - ) - sub.set_defaults(func=File.create) - - sub = subparsers.add_parser("files.get") - sub.add_argument("-i", "--id", required=True, help="The files ID") - sub.set_defaults(func=File.get) - - sub = subparsers.add_parser("files.delete") - sub.add_argument("-i", "--id", required=True, help="The files ID") - sub.set_defaults(func=File.delete) - - sub = subparsers.add_parser("files.list") - sub.set_defaults(func=File.list) - - # Search - sub = subparsers.add_parser("search.create") - - sub.add_argument( - "-d", - "--documents", - help="Documents to search over", - type=str, - nargs="+", - ) - sub.add_argument( - "-q", - "--query", - required=True, - help="Search query", - ) - sub.add_argument( - "-m", - "--model", - help="The model to search with", - ) - sub.set_defaults(func=Search.create) - - # Finetune - sub = subparsers.add_parser("fine_tunes.list") - sub.set_defaults(func=FineTune.list) - - sub = subparsers.add_parser("fine_tunes.create") - sub.add_argument( - "-t", - "--training_file", - required=True, - help="JSONL file containing prompt-completion examples for training. This can " - "be the ID of a file uploaded through the OpenAI API (e.g. file-abcde12345), " - 'a local file path, or a URL that starts with "http".', - ) - sub.add_argument( - "-v", - "--validation_file", - help="JSONL file containing prompt-completion examples for validation. This can " - "be the ID of a file uploaded through the OpenAI API (e.g. file-abcde12345), " - 'a local file path, or a URL that starts with "http".', - ) - sub.add_argument( - "--no_check_if_files_exist", - dest="check_if_files_exist", - action="store_false", - help="If this argument is set and training_file or validation_file are file paths, immediately upload them. If this argument is not set, check if they may be duplicates of already uploaded files before uploading, based on file name and file size.", - ) - sub.add_argument( - "-m", - "--model", - help="The model to start fine-tuning from", - ) - sub.add_argument( - "--suffix", - help="If set, this argument can be used to customize the generated fine-tuned model name." - "All punctuation and whitespace in `suffix` will be replaced with a " - "single dash, and the string will be lower cased. The max " - "length of `suffix` is 40 chars. " - "The generated name will match the form `{base_model}:ft-{org-title}:{suffix}-{timestamp}`. " - 'For example, `openai api fine_tunes.create -t test.jsonl -m ada --suffix "custom model name" ' - "could generate a model with the name " - "ada:ft-your-org:custom-model-name-2022-02-15-04-21-04", - ) - sub.add_argument( - "--no_follow", - action="store_true", - help="If set, returns immediately after creating the job. Otherwise, streams events and waits for the job to complete.", - ) - sub.add_argument( - "--n_epochs", - type=int, - help="The number of epochs to train the model for. An epoch refers to one " - "full cycle through the training dataset.", - ) - sub.add_argument( - "--batch_size", - type=int, - help="The batch size to use for training. The batch size is the number of " - "training examples used to train a single forward and backward pass.", - ) - sub.add_argument( - "--learning_rate_multiplier", - type=float, - help="The learning rate multiplier to use for training. The fine-tuning " - "learning rate is determined by the original learning rate used for " - "pretraining multiplied by this value.", - ) - sub.add_argument( - "--prompt_loss_weight", - type=float, - help="The weight to use for the prompt loss. 
The optimum value here depends "
-        "on your use case. This determines how much the model prioritizes "
-        "learning from prompt tokens vs learning from completion tokens.",
-    )
-    sub.add_argument(
-        "--compute_classification_metrics",
-        action="store_true",
-        help="If set, we calculate classification-specific metrics such as accuracy "
-        "and F-1 score using the validation set at the end of every epoch.",
-    )
-    sub.set_defaults(compute_classification_metrics=None)
-    sub.add_argument(
-        "--classification_n_classes",
-        type=int,
-        help="The number of classes in a classification task. This parameter is "
-        "required for multiclass classification.",
-    )
-    sub.add_argument(
-        "--classification_positive_class",
-        help="The positive class in binary classification. This parameter is needed "
-        "to generate precision, recall and F-1 metrics when doing binary "
-        "classification.",
-    )
-    sub.add_argument(
-        "--classification_betas",
-        type=float,
-        nargs="+",
-        help="If this is provided, we calculate F-beta scores at the specified beta "
-        "values. The F-beta score is a generalization of F-1 score. This is only "
-        "used for binary classification.",
-    )
-    sub.set_defaults(func=FineTune.create)
-
-    sub = subparsers.add_parser("fine_tunes.get")
-    sub.add_argument("-i", "--id", required=True, help="The id of the fine-tune job")
-    sub.set_defaults(func=FineTune.get)
-
-    sub = subparsers.add_parser("fine_tunes.results")
-    sub.add_argument("-i", "--id", required=True, help="The id of the fine-tune job")
-    sub.set_defaults(func=FineTune.results)
-
-    sub = subparsers.add_parser("fine_tunes.events")
-    sub.add_argument("-i", "--id", required=True, help="The id of the fine-tune job")
-
-    # TODO(rachel): Remove this in 1.0
-    sub.add_argument(
-        "-s",
-        "--stream",
-        action="store_true",
-        help="[DEPRECATED] If set, events will be streamed until the job is done. Otherwise, "
-        "displays the event history to date.",
-    )
-    sub.set_defaults(func=FineTune.events)
-
-    sub = subparsers.add_parser("fine_tunes.follow")
-    sub.add_argument("-i", "--id", required=True, help="The id of the fine-tune job")
-    sub.set_defaults(func=FineTune.follow)
-
-    sub = subparsers.add_parser("fine_tunes.cancel")
-    sub.add_argument("-i", "--id", required=True, help="The id of the fine-tune job")
-    sub.set_defaults(func=FineTune.cancel)
-
-
-def wandb_register(parser):
-    subparsers = parser.add_subparsers(
-        title="wandb", help="Logging with Weights & Biases"
-    )
-
-    def help(args):
-        parser.print_help()
-
-    parser.set_defaults(func=help)
-
-    sub = subparsers.add_parser("sync")
-    sub.add_argument("-i", "--id", help="The id of the fine-tune job (optional)")
-    sub.add_argument(
-        "-n",
-        "--n_fine_tunes",
-        type=int,
-        default=None,
-        help="Number of most recent fine-tunes to log when an id is not provided. By default, every fine-tune is synced.",
-    )
-    sub.add_argument(
-        "--project",
-        default="GPT-3",
-        help="""Name of the project where you're sending runs. By default, it is "GPT-3".""",
-    )
-    sub.add_argument(
-        "--entity",
-        help="Username or team name where you're sending runs.
By default, your default entity is used, which is usually your username.",
-    )
-    sub.add_argument(
-        "--force",
-        action="store_true",
-        help="Forces logging and overwrites the existing wandb run of the same fine-tune.",
-    )
-    sub.set_defaults(force=False)
-    sub.set_defaults(func=WandbLogger.sync)
diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_methods_and_attributes.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_methods_and_attributes.cpp
deleted file mode 100644
index 11d4e7b3501a8bb37b829af6c4aa5d4a4e094f8e..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/pybind11/tests/test_methods_and_attributes.cpp
+++ /dev/null
@@ -1,372 +0,0 @@
-/*
-    tests/test_methods_and_attributes.cpp -- constructors, destructors, attribute access,
-    __str__, argument and return value conventions
-
-    Copyright (c) 2016 Wenzel Jakob <wenzel.jakob@epfl.ch>
-
-    All rights reserved. Use of this source code is governed by a
-    BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-#include "constructor_stats.h"
-
-#if !defined(PYBIND11_OVERLOAD_CAST)
-template <typename... Args>
-using overload_cast_ = pybind11::detail::overload_cast_impl<Args...>;
-#endif
-
-class ExampleMandA {
-public:
-    ExampleMandA() { print_default_created(this); }
-    ExampleMandA(int value) : value(value) { print_created(this, value); }
-    ExampleMandA(const ExampleMandA &e) : value(e.value) { print_copy_created(this); }
-    ExampleMandA(std::string&&) {}
-    ExampleMandA(ExampleMandA &&e) : value(e.value) { print_move_created(this); }
-    ~ExampleMandA() { print_destroyed(this); }
-
-    std::string toString() {
-        return "ExampleMandA[value=" + std::to_string(value) + "]";
-    }
-
-    void operator=(const ExampleMandA &e) { print_copy_assigned(this); value = e.value; }
-    void operator=(ExampleMandA &&e) { print_move_assigned(this); value = e.value; }
-
-    void add1(ExampleMandA other) { value += other.value; } // passing by value
-    void add2(ExampleMandA &other) { value += other.value; } // passing by reference
-    void add3(const ExampleMandA &other) { value += other.value; } // passing by const reference
-    void add4(ExampleMandA *other) { value += other->value; } // passing by pointer
-    void add5(const ExampleMandA *other) { value += other->value; } // passing by const pointer
-
-    void add6(int other) { value += other; } // passing by value
-    void add7(int &other) { value += other; } // passing by reference
-    void add8(const int &other) { value += other; } // passing by const reference
-    void add9(int *other) { value += *other; } // passing by pointer
-    void add10(const int *other) { value += *other; } // passing by const pointer
-
-    void consume_str(std::string&&) {}
-
-    ExampleMandA self1() { return *this; } // return by value
-    ExampleMandA &self2() { return *this; } // return by reference
-    const ExampleMandA &self3() { return *this; } // return by const reference
-    ExampleMandA *self4() { return this; } // return by pointer
-    const ExampleMandA *self5() { return this; } // return by const pointer
-
-    int internal1() { return value; } // return by value
-    int &internal2() { return value; } // return by reference
-    const int &internal3() { return value; } // return by const reference
-    int *internal4() { return &value; } // return by pointer
-    const int *internal5() { return &value; } // return by const pointer
-
-    py::str overloaded() { return "()"; }
-    py::str overloaded(int) { return "(int)"; }
-    py::str overloaded(int, float) { return "(int, float)"; }
-    py::str overloaded(float, int) { return "(float, int)"; }
-    py::str overloaded(int, int) { return
"(int, int)"; } - py::str overloaded(float, float) { return "(float, float)"; } - py::str overloaded(int) const { return "(int) const"; } - py::str overloaded(int, float) const { return "(int, float) const"; } - py::str overloaded(float, int) const { return "(float, int) const"; } - py::str overloaded(int, int) const { return "(int, int) const"; } - py::str overloaded(float, float) const { return "(float, float) const"; } - - static py::str overloaded(float) { return "static float"; } - - int value = 0; -}; - -struct TestProperties { - int value = 1; - static int static_value; - - int get() const { return value; } - void set(int v) { value = v; } - - static int static_get() { return static_value; } - static void static_set(int v) { static_value = v; } -}; -int TestProperties::static_value = 1; - -struct TestPropertiesOverride : TestProperties { - int value = 99; - static int static_value; -}; -int TestPropertiesOverride::static_value = 99; - -struct TestPropRVP { - UserType v1{1}; - UserType v2{1}; - static UserType sv1; - static UserType sv2; - - const UserType &get1() const { return v1; } - const UserType &get2() const { return v2; } - UserType get_rvalue() const { return v2; } - void set1(int v) { v1.set(v); } - void set2(int v) { v2.set(v); } -}; -UserType TestPropRVP::sv1(1); -UserType TestPropRVP::sv2(1); - -// Test None-allowed py::arg argument policy -class NoneTester { public: int answer = 42; }; -int none1(const NoneTester &obj) { return obj.answer; } -int none2(NoneTester *obj) { return obj ? obj->answer : -1; } -int none3(std::shared_ptr &obj) { return obj ? obj->answer : -1; } -int none4(std::shared_ptr *obj) { return obj && *obj ? (*obj)->answer : -1; } -int none5(std::shared_ptr obj) { return obj ? obj->answer : -1; } - -struct StrIssue { - int val = -1; - - StrIssue() = default; - StrIssue(int i) : val{i} {} -}; - -// Issues #854, #910: incompatible function args when member function/pointer is in unregistered base class -class UnregisteredBase { -public: - void do_nothing() const {} - void increase_value() { rw_value++; ro_value += 0.25; } - void set_int(int v) { rw_value = v; } - int get_int() const { return rw_value; } - double get_double() const { return ro_value; } - int rw_value = 42; - double ro_value = 1.25; -}; -class RegisteredDerived : public UnregisteredBase { -public: - using UnregisteredBase::UnregisteredBase; - double sum() const { return rw_value + ro_value; } -}; - -// Test explicit lvalue ref-qualification -struct RefQualified { - int value = 0; - - void refQualified(int other) & { value += other; } - int constRefQualified(int other) const & { return value + other; } -}; - -TEST_SUBMODULE(methods_and_attributes, m) { - // test_methods_and_attributes - py::class_ emna(m, "ExampleMandA"); - emna.def(py::init<>()) - .def(py::init()) - .def(py::init()) - .def(py::init()) - .def("add1", &ExampleMandA::add1) - .def("add2", &ExampleMandA::add2) - .def("add3", &ExampleMandA::add3) - .def("add4", &ExampleMandA::add4) - .def("add5", &ExampleMandA::add5) - .def("add6", &ExampleMandA::add6) - .def("add7", &ExampleMandA::add7) - .def("add8", &ExampleMandA::add8) - .def("add9", &ExampleMandA::add9) - .def("add10", &ExampleMandA::add10) - .def("consume_str", &ExampleMandA::consume_str) - .def("self1", &ExampleMandA::self1) - .def("self2", &ExampleMandA::self2) - .def("self3", &ExampleMandA::self3) - .def("self4", &ExampleMandA::self4) - .def("self5", &ExampleMandA::self5) - .def("internal1", &ExampleMandA::internal1) - .def("internal2", &ExampleMandA::internal2) - 
.def("internal3", &ExampleMandA::internal3) - .def("internal4", &ExampleMandA::internal4) - .def("internal5", &ExampleMandA::internal5) -#if defined(PYBIND11_OVERLOAD_CAST) - .def("overloaded", py::overload_cast<>(&ExampleMandA::overloaded)) - .def("overloaded", py::overload_cast(&ExampleMandA::overloaded)) - .def("overloaded", py::overload_cast(&ExampleMandA::overloaded)) - .def("overloaded", py::overload_cast(&ExampleMandA::overloaded)) - .def("overloaded", py::overload_cast(&ExampleMandA::overloaded)) - .def("overloaded", py::overload_cast(&ExampleMandA::overloaded)) - .def("overloaded_float", py::overload_cast(&ExampleMandA::overloaded)) - .def("overloaded_const", py::overload_cast(&ExampleMandA::overloaded, py::const_)) - .def("overloaded_const", py::overload_cast(&ExampleMandA::overloaded, py::const_)) - .def("overloaded_const", py::overload_cast(&ExampleMandA::overloaded, py::const_)) - .def("overloaded_const", py::overload_cast(&ExampleMandA::overloaded, py::const_)) - .def("overloaded_const", py::overload_cast(&ExampleMandA::overloaded, py::const_)) -#else - // Use both the traditional static_cast method and the C++11 compatible overload_cast_ - .def("overloaded", overload_cast_<>()(&ExampleMandA::overloaded)) - .def("overloaded", overload_cast_()(&ExampleMandA::overloaded)) - .def("overloaded", overload_cast_()(&ExampleMandA::overloaded)) - .def("overloaded", static_cast(&ExampleMandA::overloaded)) - .def("overloaded", static_cast(&ExampleMandA::overloaded)) - .def("overloaded", static_cast(&ExampleMandA::overloaded)) - .def("overloaded_float", overload_cast_()(&ExampleMandA::overloaded)) - .def("overloaded_const", overload_cast_()(&ExampleMandA::overloaded, py::const_)) - .def("overloaded_const", overload_cast_()(&ExampleMandA::overloaded, py::const_)) - .def("overloaded_const", static_cast(&ExampleMandA::overloaded)) - .def("overloaded_const", static_cast(&ExampleMandA::overloaded)) - .def("overloaded_const", static_cast(&ExampleMandA::overloaded)) -#endif - // test_no_mixed_overloads - // Raise error if trying to mix static/non-static overloads on the same name: - .def_static("add_mixed_overloads1", []() { - auto emna = py::reinterpret_borrow>(py::module::import("pybind11_tests.methods_and_attributes").attr("ExampleMandA")); - emna.def ("overload_mixed1", static_cast(&ExampleMandA::overloaded)) - .def_static("overload_mixed1", static_cast(&ExampleMandA::overloaded)); - }) - .def_static("add_mixed_overloads2", []() { - auto emna = py::reinterpret_borrow>(py::module::import("pybind11_tests.methods_and_attributes").attr("ExampleMandA")); - emna.def_static("overload_mixed2", static_cast(&ExampleMandA::overloaded)) - .def ("overload_mixed2", static_cast(&ExampleMandA::overloaded)); - }) - .def("__str__", &ExampleMandA::toString) - .def_readwrite("value", &ExampleMandA::value); - - // test_copy_method - // Issue #443: can't call copied methods in Python 3 - emna.attr("add2b") = emna.attr("add2"); - - // test_properties, test_static_properties, test_static_cls - py::class_(m, "TestProperties") - .def(py::init<>()) - .def_readonly("def_readonly", &TestProperties::value) - .def_readwrite("def_readwrite", &TestProperties::value) - .def_property("def_writeonly", nullptr, - [](TestProperties& s,int v) { s.value = v; } ) - .def_property("def_property_writeonly", nullptr, &TestProperties::set) - .def_property_readonly("def_property_readonly", &TestProperties::get) - .def_property("def_property", &TestProperties::get, &TestProperties::set) - .def_property("def_property_impossible", nullptr, 
nullptr) - .def_readonly_static("def_readonly_static", &TestProperties::static_value) - .def_readwrite_static("def_readwrite_static", &TestProperties::static_value) - .def_property_static("def_writeonly_static", nullptr, - [](py::object, int v) { TestProperties::static_value = v; }) - .def_property_readonly_static("def_property_readonly_static", - [](py::object) { return TestProperties::static_get(); }) - .def_property_static("def_property_writeonly_static", nullptr, - [](py::object, int v) { return TestProperties::static_set(v); }) - .def_property_static("def_property_static", - [](py::object) { return TestProperties::static_get(); }, - [](py::object, int v) { TestProperties::static_set(v); }) - .def_property_static("static_cls", - [](py::object cls) { return cls; }, - [](py::object cls, py::function f) { f(cls); }); - - py::class_(m, "TestPropertiesOverride") - .def(py::init<>()) - .def_readonly("def_readonly", &TestPropertiesOverride::value) - .def_readonly_static("def_readonly_static", &TestPropertiesOverride::static_value); - - auto static_get1 = [](py::object) -> const UserType & { return TestPropRVP::sv1; }; - auto static_get2 = [](py::object) -> const UserType & { return TestPropRVP::sv2; }; - auto static_set1 = [](py::object, int v) { TestPropRVP::sv1.set(v); }; - auto static_set2 = [](py::object, int v) { TestPropRVP::sv2.set(v); }; - auto rvp_copy = py::return_value_policy::copy; - - // test_property_return_value_policies - py::class_(m, "TestPropRVP") - .def(py::init<>()) - .def_property_readonly("ro_ref", &TestPropRVP::get1) - .def_property_readonly("ro_copy", &TestPropRVP::get2, rvp_copy) - .def_property_readonly("ro_func", py::cpp_function(&TestPropRVP::get2, rvp_copy)) - .def_property("rw_ref", &TestPropRVP::get1, &TestPropRVP::set1) - .def_property("rw_copy", &TestPropRVP::get2, &TestPropRVP::set2, rvp_copy) - .def_property("rw_func", py::cpp_function(&TestPropRVP::get2, rvp_copy), &TestPropRVP::set2) - .def_property_readonly_static("static_ro_ref", static_get1) - .def_property_readonly_static("static_ro_copy", static_get2, rvp_copy) - .def_property_readonly_static("static_ro_func", py::cpp_function(static_get2, rvp_copy)) - .def_property_static("static_rw_ref", static_get1, static_set1) - .def_property_static("static_rw_copy", static_get2, static_set2, rvp_copy) - .def_property_static("static_rw_func", py::cpp_function(static_get2, rvp_copy), static_set2) - // test_property_rvalue_policy - .def_property_readonly("rvalue", &TestPropRVP::get_rvalue) - .def_property_readonly_static("static_rvalue", [](py::object) { return UserType(1); }); - - // test_metaclass_override - struct MetaclassOverride { }; - py::class_(m, "MetaclassOverride", py::metaclass((PyObject *) &PyType_Type)) - .def_property_readonly_static("readonly", [](py::object) { return 1; }); - -#if !defined(PYPY_VERSION) - // test_dynamic_attributes - class DynamicClass { - public: - DynamicClass() { print_default_created(this); } - DynamicClass(const DynamicClass&) = delete; - ~DynamicClass() { print_destroyed(this); } - }; - py::class_(m, "DynamicClass", py::dynamic_attr()) - .def(py::init()); - - class CppDerivedDynamicClass : public DynamicClass { }; - py::class_(m, "CppDerivedDynamicClass") - .def(py::init()); -#endif - - // test_bad_arg_default - // Issue/PR #648: bad arg default debugging output -#if !defined(NDEBUG) - m.attr("debug_enabled") = true; -#else - m.attr("debug_enabled") = false; -#endif - m.def("bad_arg_def_named", []{ - auto m = py::module::import("pybind11_tests"); - m.def("should_fail", 
[](int, UnregisteredType) {}, py::arg(), py::arg("a") = UnregisteredType()); - }); - m.def("bad_arg_def_unnamed", []{ - auto m = py::module::import("pybind11_tests"); - m.def("should_fail", [](int, UnregisteredType) {}, py::arg(), py::arg() = UnregisteredType()); - }); - - // test_accepts_none - py::class_>(m, "NoneTester") - .def(py::init<>()); - m.def("no_none1", &none1, py::arg().none(false)); - m.def("no_none2", &none2, py::arg().none(false)); - m.def("no_none3", &none3, py::arg().none(false)); - m.def("no_none4", &none4, py::arg().none(false)); - m.def("no_none5", &none5, py::arg().none(false)); - m.def("ok_none1", &none1); - m.def("ok_none2", &none2, py::arg().none(true)); - m.def("ok_none3", &none3); - m.def("ok_none4", &none4, py::arg().none(true)); - m.def("ok_none5", &none5); - - // test_str_issue - // Issue #283: __str__ called on uninitialized instance when constructor arguments invalid - py::class_(m, "StrIssue") - .def(py::init()) - .def(py::init<>()) - .def("__str__", [](const StrIssue &si) { - return "StrIssue[" + std::to_string(si.val) + "]"; } - ); - - // test_unregistered_base_implementations - // - // Issues #854/910: incompatible function args when member function/pointer is in unregistered - // base class The methods and member pointers below actually resolve to members/pointers in - // UnregisteredBase; before this test/fix they would be registered via lambda with a first - // argument of an unregistered type, and thus uncallable. - py::class_(m, "RegisteredDerived") - .def(py::init<>()) - .def("do_nothing", &RegisteredDerived::do_nothing) - .def("increase_value", &RegisteredDerived::increase_value) - .def_readwrite("rw_value", &RegisteredDerived::rw_value) - .def_readonly("ro_value", &RegisteredDerived::ro_value) - // These should trigger a static_assert if uncommented - //.def_readwrite("fails", &UserType::value) // should trigger a static_assert if uncommented - //.def_readonly("fails", &UserType::value) // should trigger a static_assert if uncommented - .def_property("rw_value_prop", &RegisteredDerived::get_int, &RegisteredDerived::set_int) - .def_property_readonly("ro_value_prop", &RegisteredDerived::get_double) - // This one is in the registered class: - .def("sum", &RegisteredDerived::sum) - ; - - using Adapted = decltype(py::method_adaptor(&RegisteredDerived::do_nothing)); - static_assert(std::is_same::value, ""); - - // test_methods_and_attributes - py::class_(m, "RefQualified") - .def(py::init<>()) - .def_readonly("value", &RefQualified::value) - .def("refQualified", &RefQualified::refQualified) - .def("constRefQualified", &RefQualified::constRefQualified); -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/math_private.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/math_private.h deleted file mode 100644 index bc2d6357f2c169ee7e4e60f466dc09f4ed4b30d2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/math_private.h +++ /dev/null @@ -1,136 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/* - * ==================================================== - * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved. - * - * Developed at SunPro, a Sun Microsystems, Inc. business. - * Permission to use, copy, modify, and distribute this - * software is freely granted, provided that this notice - * is preserved. - * ==================================================== - */ - -/* adapted from FreeBSD: - * lib/msun/src/math_private.h - */ -#pragma once - -#include -#include -#include - -namespace thrust{ -namespace detail{ -namespace complex{ - -using thrust::complex; - -typedef union -{ - float value; - uint32_t word; -} ieee_float_shape_type; - -__host__ __device__ -inline void get_float_word(uint32_t & i, float d){ - ieee_float_shape_type gf_u; - gf_u.value = (d); - (i) = gf_u.word; -} - -__host__ __device__ -inline void get_float_word(int32_t & i, float d){ - ieee_float_shape_type gf_u; - gf_u.value = (d); - (i) = gf_u.word; -} - -__host__ __device__ -inline void set_float_word(float & d, uint32_t i){ - ieee_float_shape_type sf_u; - sf_u.word = (i); - (d) = sf_u.value; -} - -// Assumes little endian ordering -typedef union -{ - double value; - struct - { - uint32_t lsw; - uint32_t msw; - } parts; - struct - { - uint64_t w; - } xparts; -} ieee_double_shape_type; - -__host__ __device__ inline -void get_high_word(uint32_t & i,double d){ - ieee_double_shape_type gh_u; - gh_u.value = (d); - (i) = gh_u.parts.msw; -} - -/* Set the more significant 32 bits of a double from an int. */ -__host__ __device__ inline -void set_high_word(double & d, uint32_t v){ - ieee_double_shape_type sh_u; - sh_u.value = (d); - sh_u.parts.msw = (v); - (d) = sh_u.value; -} - - -__host__ __device__ inline -void insert_words(double & d, uint32_t ix0, uint32_t ix1){ - ieee_double_shape_type iw_u; - iw_u.parts.msw = (ix0); - iw_u.parts.lsw = (ix1); - (d) = iw_u.value; -} - -/* Get two 32 bit ints from a double. */ -__host__ __device__ inline -void extract_words(uint32_t & ix0,uint32_t & ix1, double d){ - ieee_double_shape_type ew_u; - ew_u.value = (d); - (ix0) = ew_u.parts.msw; - (ix1) = ew_u.parts.lsw; -} - -/* Get two 32 bit ints from a double. */ -__host__ __device__ inline -void extract_words(int32_t & ix0,int32_t & ix1, double d){ - ieee_double_shape_type ew_u; - ew_u.value = (d); - (ix0) = ew_u.parts.msw; - (ix1) = ew_u.parts.lsw; -} - -} // namespace complex - -} // namespace detail - -} // namespace thrust - - -#include diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/operators/arithmetic_operators.h b/spaces/CVPR/LIVE/thrust/thrust/detail/functional/operators/arithmetic_operators.h deleted file mode 100644 index bd5b707e3ba163d7308b3d893a4f4b773af1933f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/functional/operators/arithmetic_operators.h +++ /dev/null @@ -1,432 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include <thrust/detail/config.h> -#include <thrust/detail/functional/actor.h> -#include <thrust/detail/functional/composite.h> -#include <thrust/detail/functional/operators/operator_adaptors.h> -#include <thrust/functional.h> - -namespace thrust -{ -namespace detail -{ -namespace functional -{ - -template<typename Eval> -__host__ __device__ -actor< - composite< - transparent_unary_operator<thrust::negate<>>, - actor<Eval> - > -> -operator-(const actor<Eval> &_1) -{ - return compose(transparent_unary_operator<thrust::negate<>>(), _1); -} // end operator-() - -// there's no standard unary_plus functional, so roll an ad hoc one here -struct unary_plus -{ - using is_transparent = void; - - __thrust_exec_check_disable__ - template <typename T1> - __host__ __device__ - constexpr auto operator()(T1&& t1) const - noexcept(noexcept(+THRUST_FWD(t1))) -> decltype(+THRUST_FWD(t1)) - { - return +THRUST_FWD(t1); - } -}; - -template<typename Eval> -__host__ __device__ -actor< - composite< - transparent_unary_operator<unary_plus>, - actor<Eval> - > -> -operator+(const actor<Eval> &_1) -{ - return compose(transparent_unary_operator<unary_plus>(), _1); -} // end operator+() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::plus<>>, - actor<T1>, - typename as_actor<T2>::type - > -> -operator+(const actor<T1> &_1, const T2 &_2) -{ - return compose(transparent_binary_operator<thrust::plus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator+() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::plus<>>, - typename as_actor<T1>::type, - actor<T2> - > -> -operator+(const T1 &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::plus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator+() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::plus<>>, - actor<T1>, - actor<T2> - > -> -operator+(const actor<T1> &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::plus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator+() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::minus<>>, - typename as_actor<T1>::type, - actor<T2> - > -> -operator-(const T1 &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::minus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator-() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::minus<>>, - actor<T1>, - typename as_actor<T2>::type - > -> -operator-(const actor<T1> &_1, const T2 &_2) -{ - return compose(transparent_binary_operator<thrust::minus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator-() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::minus<>>, - actor<T1>, - actor<T2> - > -> -operator-(const actor<T1> &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::minus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator-() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::multiplies<>>, - typename as_actor<T1>::type, - actor<T2> - > -> -operator*(const T1 &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::multiplies<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator*() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::multiplies<>>, - actor<T1>, - typename as_actor<T2>::type - > -> -operator*(const actor<T1> &_1, const T2 &_2) -{ - return compose(transparent_binary_operator<thrust::multiplies<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator*() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::multiplies<>>, - actor<T1>, - actor<T2> - > -> -operator*(const actor<T1> &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::multiplies<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator*() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::divides<>>, - actor<T1>, - typename as_actor<T2>::type - > -> -operator/(const actor<T1> &_1, const T2 &_2) -{ - return compose(transparent_binary_operator<thrust::divides<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator/() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::divides<>>, - typename as_actor<T1>::type, - actor<T2> - > -> -operator/(const T1 &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::divides<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator/() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::divides<>>, - actor<T1>, - actor<T2> - > -> -operator/(const actor<T1> &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::divides<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator/() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::modulus<>>, - actor<T1>, - typename as_actor<T2>::type - > -> -operator%(const actor<T1> &_1, const T2 &_2) -{ - return compose(transparent_binary_operator<thrust::modulus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator%() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::modulus<>>, - typename as_actor<T1>::type, - actor<T2> - > -> -operator%(const T1 &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::modulus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator%() - -template<typename T1, typename T2> -__host__ __device__ -actor< - composite< - transparent_binary_operator<thrust::modulus<>>, - actor<T1>, - actor<T2> - > -> -operator%(const actor<T1> &_1, const actor<T2> &_2) -{ - return compose(transparent_binary_operator<thrust::modulus<>>(), - make_actor(_1), - make_actor(_2)); -} // end operator%() - -// there's no standard prefix_increment functional, so roll an ad hoc one here -struct prefix_increment -{ - using is_transparent = void; - - __thrust_exec_check_disable__ - template <typename T1> - __host__ __device__ - constexpr auto operator()(T1&& t1) const - noexcept(noexcept(++THRUST_FWD(t1))) -> decltype(++THRUST_FWD(t1)) - { - return ++THRUST_FWD(t1); - } -}; // end prefix_increment - -template<typename Eval> -__host__ __device__ -actor< - composite< - transparent_unary_operator<prefix_increment>, - actor<Eval> - > -> -operator++(const actor<Eval> &_1) -{ - return compose(transparent_unary_operator<prefix_increment>(), _1); -} // end operator++() - - -// there's no standard postfix_increment functional, so roll an ad hoc one here -struct postfix_increment -{ - using is_transparent = void; - - __thrust_exec_check_disable__ - template <typename T1> - __host__ __device__ - constexpr auto operator()(T1&& t1) const - noexcept(noexcept(THRUST_FWD(t1)++)) -> decltype(THRUST_FWD(t1)++) - { - return THRUST_FWD(t1)++; - } -}; // end postfix_increment - -template<typename Eval> -__host__ __device__ -actor< - composite< - transparent_unary_operator<postfix_increment>, - actor<Eval> - > -> -operator++(const actor<Eval> &_1, int) -{ - return compose(transparent_unary_operator<postfix_increment>(), _1); -} // end operator++() - - -// there's no standard prefix_decrement functional, so roll an ad hoc one here -struct prefix_decrement -{ - using is_transparent = void; - - __thrust_exec_check_disable__ - template <typename T1> - __host__ __device__ - constexpr auto operator()(T1&& t1) const - noexcept(noexcept(--THRUST_FWD(t1))) -> decltype(--THRUST_FWD(t1)) - { - return --THRUST_FWD(t1); - } -}; // end prefix_decrement - -template<typename Eval> -__host__ __device__ -actor< - composite< - transparent_unary_operator<prefix_decrement>, - actor<Eval> - > -> -operator--(const actor<Eval> &_1) -{ - return compose(transparent_unary_operator<prefix_decrement>(), _1); -} // end operator--() - - -// there's no standard postfix_decrement functional, so roll an ad hoc one here -struct postfix_decrement -{ - using is_transparent = void; - - __thrust_exec_check_disable__ - template <typename T1> - __host__ __device__ - constexpr auto operator()(T1&& t1) const - noexcept(noexcept(THRUST_FWD(t1)--)) -> decltype(THRUST_FWD(t1)--) - { - return THRUST_FWD(t1)--; - } -}; // end postfix_decrement - -template<typename Eval> -__host__ __device__ -actor< - composite< - transparent_unary_operator<postfix_decrement>, - actor<Eval> - > -> -operator--(const actor<Eval> &_1, int) -{ - return compose(transparent_unary_operator<postfix_decrement>(), _1); -} // end operator--() - -} // end functional -} // end detail -} // end thrust - diff --git a/spaces/CVPR/Text2Human/Text2Human/models/archs/vqgan_arch.py b/spaces/CVPR/Text2Human/Text2Human/models/archs/vqgan_arch.py deleted file mode 100644 index 51980ec048dc25e5c84ae26ba6bde384d1d2a94f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Text2Human/Text2Human/models/archs/vqgan_arch.py +++ /dev/null @@ -1,1203 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from einops import rearrange - - -class VectorQuantizer(nn.Module): - """ - Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly - avoids costly matrix multiplications and allows for post-hoc remapping of indices. - """ - - # NOTE: due to a bug the beta term was applied to the wrong term. for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. - def __init__(self, - n_e, - e_dim, - beta, - remap=None, - unknown_index="random", - sane_index_shape=False, - legacy=True): - super().__init__() - self.n_e = n_e - self.e_dim = e_dim - self.beta = beta - self.legacy = legacy - - self.embedding = nn.Embedding(self.n_e, self.e_dim) - self.embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed + 1 - print(f"Remapping {self.n_e} indices to {self.re_embed} indices. 
" - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape(ishape[0], -1) - used = self.used.to(inds) - match = (inds[:, :, None] == used[None, None, ...]).long() - new = match.argmax(-1) - unknown = match.sum(2) < 1 - if self.unknown_index == "random": - new[unknown] = torch.randint( - 0, self.re_embed, - size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape(ishape[0], -1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds >= self.used.shape[0]] = 0 # simply set to zero - back = torch.gather(used[None, :][inds.shape[0] * [0], :], 1, inds) - return back.reshape(ishape) - - def forward(self, z, temp=None, rescale_logits=False, return_logits=False): - assert temp is None or temp == 1.0, "Only for interface compatible with Gumbel" - assert rescale_logits == False, "Only for interface compatible with Gumbel" - assert return_logits == False, "Only for interface compatible with Gumbel" - # reshape z -> (batch, height, width, channel) and flatten - z = rearrange(z, 'b c h w -> b h w c').contiguous() - z_flattened = z.view(-1, self.e_dim) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - d = torch.sum(z_flattened ** 2, dim=1, keepdim=True) + \ - torch.sum(self.embedding.weight**2, dim=1) - 2 * \ - torch.einsum('bd,dn->bn', z_flattened, rearrange(self.embedding.weight, 'n d -> d n')) - - min_encoding_indices = torch.argmin(d, dim=1) - z_q = self.embedding(min_encoding_indices).view(z.shape) - perplexity = None - min_encodings = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * torch.mean((z_q.detach()-z)**2) + \ - torch.mean((z_q - z.detach()) ** 2) - else: - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - z_q = rearrange(z_q, 'b h w c -> b c h w').contiguous() - - if self.remap is not None: - min_encoding_indices = min_encoding_indices.reshape( - z.shape[0], -1) # add batch axis - min_encoding_indices = self.remap_to_used(min_encoding_indices) - min_encoding_indices = min_encoding_indices.reshape(-1, - 1) # flatten - - if self.sane_index_shape: - min_encoding_indices = min_encoding_indices.reshape( - z_q.shape[0], z_q.shape[2], z_q.shape[3]) - - return z_q, loss, (perplexity, min_encodings, min_encoding_indices) - - def get_codebook_entry(self, indices, shape): - # shape specifying (batch, height, width, channel) - if self.remap is not None: - indices = indices.reshape(shape[0], -1) # add batch axis - indices = self.unmap_to_all(indices) - indices = indices.reshape(-1) # flatten again - - # get quantized latent vectors - z_q = self.embedding(indices) - - if shape is not None: - z_q = z_q.view(shape) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - - -class VectorQuantizerTexture(nn.Module): - """ - Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly - avoids costly matrix multiplications and allows for post-hoc remapping of indices. - """ - - # NOTE: due to a bug the beta term was applied to the wrong term. 
for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. - def __init__(self, - n_e, - e_dim, - beta, - remap=None, - unknown_index="random", - sane_index_shape=False, - legacy=True): - super().__init__() - self.n_e = n_e - self.e_dim = e_dim - self.beta = beta - self.legacy = legacy - - # TODO: decide number of embeddings - self.embedding_list = nn.ModuleList( - [nn.Embedding(self.n_e, self.e_dim) for i in range(18)]) - for embedding in self.embedding_list: - embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed + 1 - print(f"Remapping {self.n_e} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def remap_to_used(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape(ishape[0], -1) - used = self.used.to(inds) - match = (inds[:, :, None] == used[None, None, ...]).long() - new = match.argmax(-1) - unknown = match.sum(2) < 1 - if self.unknown_index == "random": - new[unknown] = torch.randint( - 0, self.re_embed, - size=new[unknown].shape).to(device=new.device) - else: - new[unknown] = self.unknown_index - return new.reshape(ishape) - - def unmap_to_all(self, inds): - ishape = inds.shape - assert len(ishape) > 1 - inds = inds.reshape(ishape[0], -1) - used = self.used.to(inds) - if self.re_embed > self.used.shape[0]: # extra token - inds[inds >= self.used.shape[0]] = 0 # simply set to zero - back = torch.gather(used[None, :][inds.shape[0] * [0], :], 1, inds) - return back.reshape(ishape) - - def forward(self, - z, - segm_map, - temp=None, - rescale_logits=False, - return_logits=False): - assert temp is None or temp == 1.0, "Only for interface compatible with Gumbel" - assert rescale_logits == False, "Only for interface compatible with Gumbel" - assert return_logits == False, "Only for interface compatible with Gumbel" - - segm_map = F.interpolate(segm_map, size=z.size()[2:], mode='nearest') - # reshape z -> (batch, height, width, channel) and flatten - z = rearrange(z, 'b c h w -> b h w c').contiguous() - z_flattened = z.view(-1, self.e_dim) - - # flatten segm_map (b, h, w) - segm_map_flatten = segm_map.view(-1) - - z_q = torch.zeros_like(z_flattened) - min_encoding_indices_list = [] - min_encoding_indices_continual = torch.full( - segm_map_flatten.size(), - fill_value=-1, - dtype=torch.long, - device=segm_map_flatten.device) - for codebook_idx in range(18): - min_encoding_indices = torch.full( - segm_map_flatten.size(), - fill_value=-1, - dtype=torch.long, - device=segm_map_flatten.device) - if torch.sum(segm_map_flatten == codebook_idx) > 0: - z_selected = z_flattened[segm_map_flatten == codebook_idx] - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d_selected = torch.sum( - z_selected**2, dim=1, keepdim=True) + torch.sum( - self.embedding_list[codebook_idx].weight**2, - dim=1) - 2 * torch.einsum( - 'bd,dn->bn', z_selected, - rearrange(self.embedding_list[codebook_idx].weight, - 'n d -> d n')) - min_encoding_indices_selected = torch.argmin(d_selected, dim=1) - z_q_selected = self.embedding_list[codebook_idx]( - 
min_encoding_indices_selected) - z_q[segm_map_flatten == codebook_idx] = z_q_selected - min_encoding_indices[ - segm_map_flatten == - codebook_idx] = min_encoding_indices_selected - min_encoding_indices_continual[ - segm_map_flatten == - codebook_idx] = min_encoding_indices_selected + 1024 * codebook_idx - min_encoding_indices = min_encoding_indices.reshape( - z.shape[0], z.shape[1], z.shape[2]) - min_encoding_indices_list.append(min_encoding_indices) - - min_encoding_indices_continual = min_encoding_indices_continual.reshape( - z.shape[0], z.shape[1], z.shape[2]) - z_q = z_q.view(z.shape) - perplexity = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * torch.mean((z_q.detach()-z)**2) + \ - torch.mean((z_q - z.detach()) ** 2) - else: - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - # reshape back to match original input shape - z_q = rearrange(z_q, 'b h w c -> b c h w').contiguous() - - return z_q, loss, (perplexity, min_encoding_indices_continual, - min_encoding_indices_list) - - def get_codebook_entry(self, indices_list, segm_map, shape): - # flatten segm_map (b, h, w) - segm_map = F.interpolate( - segm_map, size=(shape[1], shape[2]), mode='nearest') - segm_map_flatten = segm_map.view(-1) - - z_q = torch.zeros((shape[0] * shape[1] * shape[2]), - self.e_dim).to(segm_map.device) - for codebook_idx in range(18): - if torch.sum(segm_map_flatten == codebook_idx) > 0: - min_encoding_indices_selected = indices_list[ - codebook_idx].view(-1)[segm_map_flatten == codebook_idx] - z_q_selected = self.embedding_list[codebook_idx]( - min_encoding_indices_selected) - z_q[segm_map_flatten == codebook_idx] = z_q_selected - - z_q = z_q.view(shape) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q - - -def sample_patches(inputs, patch_size=3, stride=1): - """Extract sliding local patches from an input feature tensor. - The sampled pathes are row-major. - Args: - inputs (Tensor): the input feature maps, shape: (n, c, h, w). - patch_size (int): the spatial size of sampled patches. Default: 3. - stride (int): the stride of sampling. Default: 1. - Returns: - patches (Tensor): extracted patches, shape: (n, c * patch_size * - patch_size, n_patches). - """ - - patches = F.unfold(inputs, (patch_size, patch_size), stride=stride) - - return patches - - -class VectorQuantizerSpatialTextureAware(nn.Module): - """ - Improved version over VectorQuantizer, can be used as a drop-in replacement. Mostly - avoids costly matrix multiplications and allows for post-hoc remapping of indices. - """ - - # NOTE: due to a bug the beta term was applied to the wrong term. for - # backwards compatibility we use the buggy version by default, but you can - # specify legacy=False to fix it. 
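- # Shape sketch (illustrative numbers, not from the original file): with spatial_size=2 and e_dim=256 the effective codeword length becomes 256*2*2=1024, a (b, 256, 32, 16) feature map unfolds into 16*8=128 non-overlapping patches per sample, each patch is matched against the codebook of the segmentation class covering it, and the index map comes back with shape (b, 16, 8).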
- def __init__(self, - n_e, - e_dim, - beta, - spatial_size, - remap=None, - unknown_index="random", - sane_index_shape=False, - legacy=True): - super().__init__() - self.n_e = n_e - self.e_dim = e_dim * spatial_size * spatial_size - self.beta = beta - self.legacy = legacy - self.spatial_size = spatial_size - - # TODO: decide number of embeddings - self.embedding_list = nn.ModuleList( - [nn.Embedding(self.n_e, self.e_dim) for i in range(18)]) - for embedding in self.embedding_list: - embedding.weight.data.uniform_(-1.0 / self.n_e, 1.0 / self.n_e) - - self.remap = remap - if self.remap is not None: - self.register_buffer("used", torch.tensor(np.load(self.remap))) - self.re_embed = self.used.shape[0] - self.unknown_index = unknown_index # "random" or "extra" or integer - if self.unknown_index == "extra": - self.unknown_index = self.re_embed - self.re_embed = self.re_embed + 1 - print(f"Remapping {self.n_e} indices to {self.re_embed} indices. " - f"Using {self.unknown_index} for unknown indices.") - else: - self.re_embed = n_e - - self.sane_index_shape = sane_index_shape - - def forward(self, - z, - segm_map, - temp=None, - rescale_logits=False, - return_logits=False): - assert temp is None or temp == 1.0, "Only for interface compatible with Gumbel" - assert rescale_logits == False, "Only for interface compatible with Gumbel" - assert return_logits == False, "Only for interface compatible with Gumbel" - - segm_map = F.interpolate( - segm_map, - size=(z.size(2) // self.spatial_size, - z.size(3) // self.spatial_size), - mode='nearest') - - # reshape z -> (batch, height, width, channel) and flatten - # z = rearrange(z, 'b c h w -> b h w c').contiguous() ? - z_patches = sample_patches( - z, patch_size=self.spatial_size, - stride=self.spatial_size).permute(0, 2, 1) - z_patches_flattened = z_patches.reshape(-1, self.e_dim) - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - - # flatten segm_map (b, h, w) - segm_map_flatten = segm_map.view(-1) - - z_q = torch.zeros_like(z_patches_flattened) - min_encoding_indices_list = [] - min_encoding_indices_continual = torch.full( - segm_map_flatten.size(), - fill_value=-1, - dtype=torch.long, - device=segm_map_flatten.device) - - for codebook_idx in range(18): - min_encoding_indices = torch.full( - segm_map_flatten.size(), - fill_value=-1, - dtype=torch.long, - device=segm_map_flatten.device) - if torch.sum(segm_map_flatten == codebook_idx) > 0: - z_selected = z_patches_flattened[segm_map_flatten == - codebook_idx] - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d_selected = torch.sum( - z_selected**2, dim=1, keepdim=True) + torch.sum( - self.embedding_list[codebook_idx].weight**2, - dim=1) - 2 * torch.einsum( - 'bd,dn->bn', z_selected, - rearrange(self.embedding_list[codebook_idx].weight, - 'n d -> d n')) - min_encoding_indices_selected = torch.argmin(d_selected, dim=1) - z_q_selected = self.embedding_list[codebook_idx]( - min_encoding_indices_selected) - z_q[segm_map_flatten == codebook_idx] = z_q_selected - min_encoding_indices[ - segm_map_flatten == - codebook_idx] = min_encoding_indices_selected - min_encoding_indices_continual[ - segm_map_flatten == - codebook_idx] = min_encoding_indices_selected + self.n_e * codebook_idx - min_encoding_indices = min_encoding_indices.reshape( - z_patches.shape[0], segm_map.shape[2], segm_map.shape[3]) - min_encoding_indices_list.append(min_encoding_indices) - - z_q = F.fold( - z_q.view(z_patches.shape).permute(0, 2, 1), - z.size()[2:], - kernel_size=(self.spatial_size, 
self.spatial_size), - stride=self.spatial_size) - - perplexity = None - - # compute loss for embedding - if not self.legacy: - loss = self.beta * torch.mean((z_q.detach()-z)**2) + \ - torch.mean((z_q - z.detach()) ** 2) - else: - loss = torch.mean((z_q.detach()-z)**2) + self.beta * \ - torch.mean((z_q - z.detach()) ** 2) - - # preserve gradients - z_q = z + (z_q - z).detach() - - return z_q, loss, (perplexity, min_encoding_indices_continual, - min_encoding_indices_list) - - def get_codebook_entry(self, indices_list, segm_map, shape): - # flatten segm_map (b, h, w) - segm_map = F.interpolate( - segm_map, size=(shape[1], shape[2]), mode='nearest') - segm_map_flatten = segm_map.view(-1) - - z_q = torch.zeros((shape[0] * shape[1] * shape[2]), - self.e_dim).to(segm_map.device) - for codebook_idx in range(18): - if torch.sum(segm_map_flatten == codebook_idx) > 0: - min_encoding_indices_selected = indices_list[ - codebook_idx].view(-1)[segm_map_flatten == codebook_idx] - z_q_selected = self.embedding_list[codebook_idx]( - min_encoding_indices_selected) - z_q[segm_map_flatten == codebook_idx] = z_q_selected - - z_q = F.fold( - z_q.view(((shape[0], shape[1] * shape[2], - self.e_dim))).permute(0, 2, 1), - (shape[1] * self.spatial_size, shape[2] * self.spatial_size), - kernel_size=(self.spatial_size, self.spatial_size), - stride=self.spatial_size) - - return z_q - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". - """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0, 1, 0, 0)) - return emb - - -def nonlinearity(x): - # swish - return x * torch.sigmoid(x) - - -def Normalize(in_channels): - return torch.nn.GroupNorm( - num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=3, stride=1, padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate( - x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=3, stride=2, padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - - def __init__(self, - *, - in_channels, - out_channels=None, - conv_shortcut=False, - dropout, - temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else 
out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d( - in_channels, out_channels, kernel_size=3, stride=1, padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d( - out_channels, out_channels, kernel_size=3, stride=1, padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d( - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d( - in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:, :, None, None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x + h - - -class AttnBlock(nn.Module): - - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0) - self.k = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0) - self.v = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0) - self.proj_out = torch.nn.Conv2d( - in_channels, in_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = q.reshape(b, c, h * w) - q = q.permute(0, 2, 1) # b,hw,c - k = k.reshape(b, c, h * w) # b,c,hw - w_ = torch.bmm(q, k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b, c, h * w) - w_ = w_.permute(0, 2, 1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm( - v, w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b, c, h, w) - - h_ = self.proj_out(h_) - - return x + h_ - - -class Model(nn.Module): - - def __init__(self, - *, - ch, - out_ch, - ch_mult=(1, 2, 4, 8), - num_res_blocks, - attn_resolutions, - dropout=0.0, - resamp_with_conv=True, - in_channels, - resolution, - use_timestep=True): - super().__init__() - self.ch = ch - self.temb_ch = self.ch * 4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, self.temb_ch), - torch.nn.Linear(self.temb_ch, self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d( - in_channels, self.ch, kernel_size=3, stride=1, padding=1) - - curr_res = resolution - in_ch_mult = (1, ) + tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch * in_ch_mult[i_level] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append( - 
ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions - 1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch * ch_mult[i_level] - skip_in = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - if i_block == self.num_res_blocks: - skip_in = ch * in_ch_mult[i_level] - block.append( - ResnetBlock( - in_channels=block_in + skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, out_ch, kernel_size=3, stride=1, padding=1) - - def forward(self, x, t=None): - #assert x.shape[2] == x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions - 1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block](torch.cat([h, hs.pop()], - dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Encoder(nn.Module): - - def __init__(self, - ch, - num_res_blocks, - attn_resolutions, - in_channels, - resolution, - z_channels, - ch_mult=(1, 2, 4, 8), - dropout=0.0, - resamp_with_conv=True, - double_z=True): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d( - in_channels, self.ch, kernel_size=3, stride=1, padding=1) - - curr_res = resolution - in_ch_mult = (1, ) + tuple(ch_mult) - self.down = nn.ModuleList() - 
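- # Stage layout: each of the len(ch_mult) levels stacks num_res_blocks ResnetBlocks at width ch*ch_mult[i_level], inserts an AttnBlock wherever curr_res appears in attn_resolutions, and ends with a Downsample that halves curr_res on every level except the last (e.g. ch_mult=(1, 2, 4, 8) with resolution=256 gives stages at 256, 128, 64 and 32; illustrative values).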
for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch * in_ch_mult[i_level] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions - 1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, - 2 * z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - #assert x.shape[2] == x.shape[3] == self.resolution, "{}, {}, {}".format(x.shape[2], x.shape[3], self.resolution) - - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions - 1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - - def __init__(self, - in_channels, - resolution, - z_channels, - ch, - out_ch, - num_res_blocks, - attn_resolutions, - ch_mult=(1, 2, 4, 8), - dropout=0.0, - resamp_with_conv=True, - give_pre_end=False): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1, ) + tuple(ch_mult) - block_in = ch * ch_mult[self.num_resolutions - 1] - curr_res = resolution // 2**(self.num_resolutions - 1) - self.z_shape = (1, z_channels, curr_res, curr_res // 2) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d( - z_channels, block_in, kernel_size=3, stride=1, padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - block.append( - ResnetBlock( - in_channels=block_in, - out_channels=block_out, - 
temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d( - block_in, out_ch, kernel_size=3, stride=1, padding=1) - - def forward(self, z, bot_h=None): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - if i_level == 4 and bot_h is not None: - h += bot_h - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - def get_feature_top(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - if i_level == 4: - return h - - def get_feature_middle(self, z, mid_h): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - if i_level == 4: - h += mid_h - if i_level == 3: - return h - - -class DecoderRes(nn.Module): - - def __init__(self, - in_channels, - resolution, - z_channels, - ch, - num_res_blocks, - ch_mult=(1, 2, 4, 8), - dropout=0.0, - give_pre_end=False): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1, ) + tuple(ch_mult) - block_in = ch * ch_mult[self.num_resolutions - 1] - curr_res = resolution // 2**(self.num_resolutions - 1) - self.z_shape = (1, z_channels, curr_res, curr_res // 2) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d( - z_channels, block_in, kernel_size=3, stride=1, padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock( - in_channels=block_in, - 
out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock( - in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - return h - - -# patch based discriminator -class Discriminator(nn.Module): - - def __init__(self, nc, ndf, n_layers=3): - super().__init__() - - layers = [ - nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1), - nn.LeakyReLU(0.2, True) - ] - ndf_mult = 1 - ndf_mult_prev = 1 - for n in range(1, - n_layers): # gradually increase the number of filters - ndf_mult_prev = ndf_mult - ndf_mult = min(2**n, 8) - layers += [ - nn.Conv2d( - ndf * ndf_mult_prev, - ndf * ndf_mult, - kernel_size=4, - stride=2, - padding=1, - bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - ndf_mult_prev = ndf_mult - ndf_mult = min(2**n_layers, 8) - - layers += [ - nn.Conv2d( - ndf * ndf_mult_prev, - ndf * ndf_mult, - kernel_size=4, - stride=1, - padding=1, - bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - layers += [ - nn.Conv2d(ndf * ndf_mult, 1, kernel_size=4, stride=1, padding=1) - ] # output 1 channel prediction map - self.main = nn.Sequential(*layers) - - def forward(self, x): - return self.main(x) diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/atss_assigner.py b/spaces/CVPR/WALT/mmdet/core/bbox/assigners/atss_assigner.py deleted file mode 100644 index d4fe9d0e3c8704bd780d493eff20a5505dbe9580..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/assigners/atss_assigner.py +++ /dev/null @@ -1,178 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class ATSSAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `0` or a positive integer - indicating the ground truth index. - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - topk (float): number of bbox selected in each level - """ - - def __init__(self, - topk, - iou_calculator=dict(type='BboxOverlaps2D'), - ignore_iof_thr=-1): - self.topk = topk - self.iou_calculator = build_iou_calculator(iou_calculator) - self.ignore_iof_thr = ignore_iof_thr - - # https://github.com/sfzhang15/ATSS/blob/master/atss_core/modeling/rpn/atss/loss.py - - def assign(self, - bboxes, - num_level_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - """Assign gt to bboxes. - - The assignment is done in following steps - - 1. compute iou between all bbox (bbox of all pyramid levels) and gt - 2. compute center distance between all bbox and gt - 3. on each pyramid level, for each gt, select k bbox whose center - are closest to the gt center, so we total select k*l bbox as - candidates for each gt - 4. get corresponding iou for the these candidates, and compute the - mean and std, set mean + std as the iou threshold - 5. select these candidates whose iou are greater than or equal to - the threshold as positive - 6. 
limit the positive sample's center in gt - - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - num_level_bboxes (List): num of bboxes in each level - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. - """ - INF = 100000000 - bboxes = bboxes[:, :4] - num_gt, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - - # compute iou between all bbox and gt - overlaps = self.iou_calculator(bboxes, gt_bboxes) - - # assign 0 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - 0, - dtype=torch.long) - - if num_gt == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gt == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - - # compute center distance between all bbox and gt - gt_cx = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0 - gt_cy = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0 - gt_points = torch.stack((gt_cx, gt_cy), dim=1) - - bboxes_cx = (bboxes[:, 0] + bboxes[:, 2]) / 2.0 - bboxes_cy = (bboxes[:, 1] + bboxes[:, 3]) / 2.0 - bboxes_points = torch.stack((bboxes_cx, bboxes_cy), dim=1) - - distances = (bboxes_points[:, None, :] - - gt_points[None, :, :]).pow(2).sum(-1).sqrt() - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - ignore_idxs = ignore_max_overlaps > self.ignore_iof_thr - distances[ignore_idxs, :] = INF - assigned_gt_inds[ignore_idxs] = -1 - - # Selecting candidates based on the center distance - candidate_idxs = [] - start_idx = 0 - for level, bboxes_per_level in enumerate(num_level_bboxes): - # on each pyramid level, for each gt, - # select k bbox whose center are closest to the gt center - end_idx = start_idx + bboxes_per_level - distances_per_level = distances[start_idx:end_idx, :] - selectable_k = min(self.topk, bboxes_per_level) - _, topk_idxs_per_level = distances_per_level.topk( - selectable_k, dim=0, largest=False) - candidate_idxs.append(topk_idxs_per_level + start_idx) - start_idx = end_idx - candidate_idxs = torch.cat(candidate_idxs, dim=0) - - # get corresponding iou for the these candidates, and compute the - # mean and std, set mean + std as the iou threshold - candidate_overlaps = overlaps[candidate_idxs, torch.arange(num_gt)] - overlaps_mean_per_gt = candidate_overlaps.mean(0) - overlaps_std_per_gt = candidate_overlaps.std(0) - overlaps_thr_per_gt = overlaps_mean_per_gt + overlaps_std_per_gt - - is_pos = candidate_overlaps >= overlaps_thr_per_gt[None, :] - - # limit the positive sample's center in gt - for gt_idx in range(num_gt): - candidate_idxs[:, gt_idx] += gt_idx * num_bboxes - ep_bboxes_cx = bboxes_cx.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - ep_bboxes_cy = bboxes_cy.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - candidate_idxs = candidate_idxs.view(-1) - - # calculate the left, top, right, bottom distance between 
positive - # bbox center and gt side - l_ = ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 0] - t_ = ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 1] - r_ = gt_bboxes[:, 2] - ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - b_ = gt_bboxes[:, 3] - ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - is_in_gts = torch.stack([l_, t_, r_, b_], dim=1).min(dim=1)[0] > 0.01 - is_pos = is_pos & is_in_gts - - # if an anchor box is assigned to multiple gts, - # the one with the highest IoU will be selected. - overlaps_inf = torch.full_like(overlaps, - -INF).t().contiguous().view(-1) - index = candidate_idxs.view(-1)[is_pos.view(-1)] - overlaps_inf[index] = overlaps.t().contiguous().view(-1)[index] - overlaps_inf = overlaps_inf.view(num_gt, -1).t() - - max_overlaps, argmax_overlaps = overlaps_inf.max(dim=1) - assigned_gt_inds[ - max_overlaps != -INF] = argmax_overlaps[max_overlaps != -INF] + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/builder.py b/spaces/CVPR/WALT/mmdet/core/bbox/builder.py deleted file mode 100644 index 682683b62ae55396f24e9f9eea0f8193e2e88de6..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/builder.py +++ /dev/null @@ -1,20 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -BBOX_ASSIGNERS = Registry('bbox_assigner') -BBOX_SAMPLERS = Registry('bbox_sampler') -BBOX_CODERS = Registry('bbox_coder') - - -def build_assigner(cfg, **default_args): - """Builder of box assigner.""" - return build_from_cfg(cfg, BBOX_ASSIGNERS, default_args) - - -def build_sampler(cfg, **default_args): - """Builder of box sampler.""" - return build_from_cfg(cfg, BBOX_SAMPLERS, default_args) - - -def build_bbox_coder(cfg, **default_args): - """Builder of box coder.""" - return build_from_cfg(cfg, BBOX_CODERS, default_args) diff --git a/spaces/CVPR/lama-example/saicinpainting/training/visualizers/directory.py b/spaces/CVPR/lama-example/saicinpainting/training/visualizers/directory.py deleted file mode 100644 index bc42e00500c7a5b70b2cef83b03e45b5bb471ff8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/lama-example/saicinpainting/training/visualizers/directory.py +++ /dev/null @@ -1,36 +0,0 @@ -import os - -import cv2 -import numpy as np - -from saicinpainting.training.visualizers.base import BaseVisualizer, visualize_mask_and_images_batch -from saicinpainting.utils import check_and_warn_input_range - - -class DirectoryVisualizer(BaseVisualizer): - DEFAULT_KEY_ORDER = 'image predicted_image inpainted'.split(' ') - - def __init__(self, outdir, key_order=DEFAULT_KEY_ORDER, max_items_in_batch=10, - last_without_mask=True, rescale_keys=None): - self.outdir = outdir - os.makedirs(self.outdir, exist_ok=True) - self.key_order = key_order - self.max_items_in_batch = max_items_in_batch - self.last_without_mask = last_without_mask - self.rescale_keys = rescale_keys - - def __call__(self, epoch_i, batch_i, batch, suffix='', rank=None): - check_and_warn_input_range(batch['image'], 0, 1, 'DirectoryVisualizer target image') - vis_img = visualize_mask_and_images_batch(batch, self.key_order, max_items=self.max_items_in_batch, - 
last_without_mask=self.last_without_mask, - rescale_keys=self.rescale_keys) - - vis_img = np.clip(vis_img * 255, 0, 255).astype('uint8') - - curoutdir = os.path.join(self.outdir, f'epoch{epoch_i:04d}{suffix}') - os.makedirs(curoutdir, exist_ok=True) - rank_suffix = f'_r{rank}' if rank is not None else '' - out_fname = os.path.join(curoutdir, f'batch{batch_i:07d}{rank_suffix}.jpg') - - vis_img = cv2.cvtColor(vis_img, cv2.COLOR_RGB2BGR) - cv2.imwrite(out_fname, vis_img) diff --git a/spaces/CVPR/v-doc_abstractive_mac/program_translator.py b/spaces/CVPR/v-doc_abstractive_mac/program_translator.py deleted file mode 100644 index f790d9d9d44ba0b45c74a81f1c39a941730b6109..0000000000000000000000000000000000000000 --- a/spaces/CVPR/v-doc_abstractive_mac/program_translator.py +++ /dev/null @@ -1,104 +0,0 @@ - -class ProgramTranslator(object): - def __init__(self, programDict, maxArity): - self.programDict = programDict - self.maxArity = maxArity - - self.maxStack = 0 - - def functionToKey(self, function, withValInputs = True): - valInputs = "" - if withValInputs: - valInputs = "_" + ",".join(function["value_inputs"]) - functionKey = function["function"] if "_" in function["function"] else \ - "_".join([function["function"], function["function"]]) - return str(len(function["inputs"])) + "_" + functionKey + valInputs - - def typeToKey(self, function, withValInputs = True): - valInputs = "" - if withValInputs: - valInputs = "_" + ",".join(function["value_inputs"]) - functionKey = function["type"] if "_" in function["type"] else \ - "_".join([function["type"], function["type"]]) - return str(len(function["inputs"])) + "_" + functionKey + valInputs - - def keyToFunction(self, key): - assert key not in self.programDict.invalidSymbols - function = {} - parts = key.split("_") - arity = int(parts[0]) - function["function"] = "_".join([parts[1], parts[2]]) - function["value_inputs"] = [] - if len(parts) == 4: - function["value_inputs"] = parts[3].split(",") - function["inputs"] = [] - return function, arity - - def keyToArity(self, key): - if key in self.programDict.invalidSymbols: - return 0 - return int(key.split("_")[0]) - - def keyToType(self, key): - if key in self.programDict.invalidSymbols: - return ["0", "0", "0"] - return ["0:" + key.split("_")[0], "1:" + key.split("_")[1], "2:" + key.split("_")[2]] - - def programToPostfixProgram(self, program): - newProgram = [] - - def programToPostfixAux(currIndex = -1): - childrenIndices = program[currIndex]["inputs"] - #[int(child) for child in program[currIndex]["inputs"]] - childrenNewIndices = [] - for child in childrenIndices: - programToPostfixAux(child) - childrenNewIndices.append(len(newProgram) - 1) - program[currIndex]["inputs"] = childrenNewIndices - newProgram.append(program[currIndex]) - - programToPostfixAux() - return newProgram - - def programToSeq(self, program): - return [self.functionToKey(function) for function in program] - - def pdfProgramToSeq(self, program): - return [self.typeToKey(function) for function in program] - - def programToInputs(self, program, offset = 0): - inputs = [function["inputs"] for function in program] - offsetedInputs = [[FuncInput + offset for FuncInput in FuncInputs] for FuncInputs in inputs] - return offsetedInputs - - # def seqToProgram(self, seq, enforceValidPrograms = True): - # program = [] - - # def seqToProgramAux(currIndex = len(seq) - 1): - # if currIndex < 0: - # program = None - # return - # currFunc, arity = self.keyToFunction(seq[currIndex]) - # nextIndex = currIndex - 1 - # program.append(currFunc) - # 
for _ in range(arity): - # currFunc["inputs"].append(nextIndex) - # nextIndex = seqToProgramAux(nextIndex) - # currFunc["inputs"].reverse() - # return nextIndex - - # if enforceValidPrograms: - # seqToProgramAux() - # if program is not None: - # program.reverse() - # else: - # stack = [0] * self.maxArity - # for i in range(len(seq)): - # func, arity = self.keyToFunction(seq[i]) - # func["inputs"] = stack[len(stack) - arity:] - # newLength = max(len(stack) - arity, self.maxArity) - # stack = stack[:newLength] + [i + self.maxArity] - # self.maxStack = max(len(stack), self.maxStack) - # program.append(func) - - # return program diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h deleted file mode 100644 index ad1311a78f61303616504eb991aaa9c4a93d9948..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h +++ /dev/null @@ -1,33 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include <torch/extension.h> - -namespace groundingdino { - -at::Tensor ms_deform_attn_cuda_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector<at::Tensor> ms_deform_attn_cuda_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino \ No newline at end of file diff --git a/spaces/Cecil8352/vits-models/utils.py b/spaces/Cecil8352/vits-models/utils.py deleted file mode 100644 index ee4b01ddfbe8173965371b29f770f3e87615fe71..0000000000000000000000000000000000000000 --- a/spaces/Cecil8352/vits-models/utils.py +++ /dev/null @@ -1,225 +0,0 @@ -import os -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -import librosa -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except KeyError: - logger.info("%s is not in the 
checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_audio_to_torch(full_path, target_sampling_rate): - audio, sampling_rate = librosa.load(full_path, sr=target_sampling_rate, mono=True) - return torch.FloatTensor(audio.astype(np.float32)) - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = 
os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warning("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warning("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/ClipHamper/stable-diffusion-webui/app.py b/spaces/ClipHamper/stable-diffusion-webui/app.py deleted file mode 100644 index 00c05986f7e088955e9aecbb5657c3be8dfce651..0000000000000000000000000000000000000000 --- a/spaces/ClipHamper/stable-diffusion-webui/app.py +++ /dev/null @@ -1,190 +0,0 @@ -""" -Stable Diffusion Webui Version 1.6 -https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.6.0 - -""" -commit_id=r"5ef669de080814067961f28357256e8fe27544f4" #Version 1.6.0 -import os -from sys import executable -import subprocess -import pathlib -import gc - -def Gitclone(URI:str,ClonePath:pathlib.Path ) -> int : - if pathlib.Path.exists(ClonePath): - return 0 - for z in range(10): - i=subprocess.run([r"git",r"clone",str(URI),str(ClonePath)]) - if(i.returncode == 0 ): - del i - return 0 - else : - del i - raise Exception(str.format("clone \'{0}\' failed",URI)) - - -def DownLoad(URI:str,DownloadPath:pathlib.Path,DownLoadFileName:str ) -> int: - if (DownloadPath / DownLoadFileName).is_file(): return 0 - for z in range(10): - i=subprocess.run([r"aria2c",r"-c",r"-x" ,r"16", r"-s",r"16", r"-k" ,r"1M" ,r"-m",r"0",r"--enable-mmap=false",r"--console-log-level=error",r"-d",str(DownloadPath),r"-o",DownLoadFileName,URI]); - if(i.returncode == 0 ): - del i - gc.collect() - return 0 - else : - del i - raise Exception(str.format("download \'{0}\' failed",URI)) - -user_home = pathlib.Path.home().resolve() -os.chdir(str(user_home)) -#clone stable-diffusion-webui repo -print("cloning stable-diffusion-webui repo") -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui.git",user_home / r"stable-diffusion-webui") -os.chdir(str(user_home / r"stable-diffusion-webui")) -os.system("git reset --hard "+commit_id) -#install extensions -print("installing extensions") -Gitclone(r"https://github.com/vorstcavry/embeddings",user_home / r"stable-diffusion-webui" / 
r"embeddings" / r"negative") -Gitclone(r"https://github.com/vorstcavry/lora",user_home / r"stable-diffusion-webui" / r"models" / r"Lora" / r"positive") -Gitclone(r"https://github.com/vorstcavry/Checkpoint-Model",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint") - -DownLoad(r"https://huggingface.co/embed/upscale/resolve/main/4x-UltraSharp.pth",user_home / r"stable-diffusion-webui" / r"models" / r"ESRGAN" ,r"4x-UltraSharp.pth") -while (True): - i=subprocess.run([r"wget",r"https://raw.githubusercontent.com/camenduru/stable-diffusion-webui-scripts/main/run_n_times.py",r"-O",str(user_home / r"stable-diffusion-webui" / r"scripts" / r"run_n_times.py")]) - if(i.returncode == 0 ): - del i - gc.collect() - break - else : - del i -Gitclone(r"https://github.com/deforum-art/deforum-for-automatic1111-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"deforum-for-automatic1111-webui" ) -Gitclone(r"https://github.com/AlUlkesh/stable-diffusion-webui-images-browser",user_home / r"stable-diffusion-webui" / r"extensions"/ r"stable-diffusion-webui-images-browser") -Gitclone(r"https://github.com/camenduru/stable-diffusion-webui-huggingface",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-huggingface") -Gitclone(r"https://github.com/BlafKing/sd-civitai-browser-plus",user_home / r"stable-diffusion-webui" / r"extensions" / r"civitai-browser") -Gitclone(r"https://github.com/kohya-ss/sd-webui-additional-networks",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks") -Gitclone(r"https://github.com/Mikubill/sd-webui-controlnet",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-controlnet") -Gitclone(r"https://github.com/fkunn1326/openpose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"openpose-editor") -Gitclone(r"https://github.com/jexom/sd-webui-depth-lib",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-depth-lib") -Gitclone(r"https://github.com/hnmr293/posex",user_home / r"stable-diffusion-webui" / r"extensions" / r"posex") -Gitclone(r"https://github.com/nonnonstop/sd-webui-3d-open-pose-editor",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-3d-open-pose-editor") -#中文本地化的请解除下一行的注释 -#Gitclone(r"https://github.com/dtlnor/stable-diffusion-webui-localization-zh_CN.git",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-localization-zh_CN") -Gitclone(r"https://github.com/DominikDoom/a1111-sd-webui-tagcomplete.git" , user_home / r"stable-diffusion-webui" / r"extensions" / r"a1111-sd-webui-tagcomplete") -Gitclone(r"https://github.com/camenduru/sd-webui-tunnels",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-tunnels") -Gitclone(r"https://github.com/etherealxx/batchlinks-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"batchlinks-webui") -Gitclone(r"https://github.com/catppuccin/stable-diffusion-webui",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-catppuccin") -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-rembg") -Gitclone(r"https://tinyurl.com/aspect-ratio-v",user_home / r"stable-diffusion-webui" / r"extensions" / r"aspect-ratio") -Gitclone(r"https://github.com/Iyashinouta/sd-model-downloader",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-model-downloader") 
-Gitclone(r"https://github.com/AIrjen/OneButtonPrompt",user_home / r"stable-diffusion-webui" / r"extensions" / r"OneButtonPrompt") -Gitclone(r"https://github.com/AUTOMATIC1111/stable-diffusion-webui-wildcards",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-wildcards") -Gitclone(r"https://github.com/adieyal/sd-dynamic-prompts",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-dynamic-prompts") -Gitclone(r"https://github.com/d8ahazard/sd_dreambooth_extension",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd_dreambooth_extension") -Gitclone(r"https://github.com/yfszzx/stable-diffusion-webui-inspiration",user_home / r"stable-diffusion-webui" / r"extensions" / r"stable-diffusion-webui-inspiration") -Gitclone(r"https://github.com/Coyote-A/ultimate-upscale-for-automatic1111",user_home / r"stable-diffusion-webui" / r"extensions" / r"ultimate-upscale-for-automatic1111") -os.chdir(user_home / r"stable-diffusion-webui") -#download ControlNet models -print("extensions dolwnload done .\ndownloading ControlNet models") -dList =[r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_ip2p_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11e_sd15_shuffle_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_canny_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_inpaint_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_mlsd_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_normalbae_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_openpose_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_scribble_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_seg_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile_fp16.safetensors", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_ip2p_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11e_sd15_shuffle_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_canny_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1p_sd15_depth_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_inpaint_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_lineart_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_mlsd_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_normalbae_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_openpose_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_scribble_fp16.yaml", - 
r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_seg_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15_softedge_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11p_sd15s2_lineart_anime_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/raw/main/control_v11f1e_sd15_tile_fp16.yaml", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_style_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_seg_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_openpose_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_keypose_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd14v1.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_canny_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_depth_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_sketch_sd15v2.pth", - r"https://huggingface.co/ckpt/ControlNet-v1-1/resolve/main/t2iadapter_zoedepth_sd15v1.pth"] -for i in range(0,len(dList)): DownLoad(dList[i],user_home / r"stable-diffusion-webui" / r"extensions" / "sd-webui-controlnet" / r"models",pathlib.Path(dList[i]).name) -del dList -#download model -#you can change model download address here -print("ControlNet models download done.\ndownloading model") -#Stable Diffusion Checkpoint Model -#anything version4.5 -#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.5-pruned.ckpt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"anything-v4.5-pruned.ckpt") -#DownLoad(r"https://huggingface.co/ckpt/anything-v4.0/resolve/main/anything-v4.0.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"anything-v4.0.vae.pt") -#Counterfeit-V3.0 -#DownLoad(r"https://huggingface.co/gsdf/Counterfeit-V3.0/resolve/main/Counterfeit-V3.0_fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Counterfeit-V3.0_fp16.safetensors") -#AbyssOrangeMix2 sfw -#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/Models/AbyssOrangeMix2/AbyssOrangeMix2_sfw.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"AbyssOrangeMix2_sfw.safetensors") -#DownLoad(r"https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"orangemix.vae.pt") -#MeinaPastelV5 -#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV5%20-%20Baked%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"MeinaPastelV5_BakedVAE.safetensors") -#DownLoad(r"https://huggingface.co/AnonPerson/ChilloutMix/resolve/main/ChilloutMix-ni-fp16.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"ChilloutMix-ni-fp16.safetensors") -#DownLoad(r"https://huggingface.co/Meina/MeinaPastel/resolve/main/MeinaPastelV4%20-%20Without%20VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / 
r"Stable-diffusion" / r"Checkpoint",r"MeinaPastelV4%20-%20Without%20VAE.safetensors") -#DownLoad(r"https://huggingface.co/ckpt/perfect_world/resolve/main/perfectWorld_v2Baked.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"perfectWorld_v2Baked.safetensors") -#DownLoad(r"https://huggingface.co/vorstcavry/figurestyle1/resolve/main/figure.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"figure.safetensors") -#DownLoad(r"https://huggingface.co/vorstcavry/dosmix/resolve/main/ddosmix_V2.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"ddosmix_V2.safetensors") -#DownLoad(r"https://huggingface.co/ckpt/rev-animated/resolve/main/revAnimated_v11.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"revAnimated_v11.safetensors") -#DownLoad(r"https://huggingface.co/ckpt/MeinaMix/resolve/main/Meina_V8_baked_VAE.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Meina_V8_baked_VAE.safetensors") -#DownLoad(r"https://huggingface.co/ckpt/CyberRealistic/resolve/main/cyberrealistic_v13.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"cyberrealistic_v13.safetensors") -DownLoad(r"https://huggingface.co/vorstcavry/mymodel/resolve/main/Cavry_V2.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"Stable-diffusion" / r"Checkpoint",r"Cavry_V2.safetensors") -#downloadvae -DownLoad(r"https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.safetensors",user_home / r"stable-diffusion-webui" / r"models" / r"VAE",r"vae-ft-mse-840000-ema-pruned.safetensors") - -#Lora Model -#Better Light -#DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"Better_light.safetensors") -#DownLoad(r"https://civitai.com/api/download/models/39885",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"Better_light.safetensors") -#LAS -#DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"LAS.safetensors") -#DownLoad(r"https://civitai.com/api/download/models/21065",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"LAS.safetensors") -#Backlighting -#DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"extensions" / r"sd-webui-additional-networks" / r"models"/ r"lora",r"backlighting.safetensors") -#DownLoad(r"https://civitai.com/api/download/models/39164",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"backlighting.safetensors") -DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/japaneseDollLikeness_v15.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"japaneseDollLikeness_v15.safetensors") -DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/koreanDollLikeness_v20.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"koreanDollLikeness_v20.safetensors") -DownLoad(r"https://huggingface.co/vorstcavry/loraasia1/resolve/main/taiwanDollLikeness_v15.safetensors",user_home / r"stable-diffusion-webui" / r"models"/ r"lora",r"taiwanDollLikeness_v15.safetensors") - - - - -#GFPGAN Model -#detection Resnet50 
-DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.1.0/detection_Resnet50_Final.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"detection_Resnet50_Final.pth") -#parsing_parsenet -DownLoad(r"https://github.com/xinntao/facexlib/releases/download/v0.2.2/parsing_parsenet.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"parsing_parsenet.pth") -#GFPGANv1.4 -DownLoad(r"https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth",user_home / r"stable-diffusion-webui"/r"models"/r"GFPGAN",r"GFPGANv1.4.pth") -#strt Stable Diffusion Webui -print("Done\nStarting Webui...") -os.chdir(user_home / r"stable-diffusion-webui") -gc.collect() -while True: - ret=subprocess.run([executable ,user_home / r"stable-diffusion-webui" / r"launch.py",r"--precision",r"full",r"--no-half",r"--no-half-vae",r"--enable-insecure-extension-access",r"--medvram",r"--skip-torch-cuda-test",r"--enable-console-prompts",r"--ui-settings-file="+str(pathlib.Path(__file__).parent /r"config.json")]) - if(ret.returncode == 0 ): - del ret - gc.collect() - else : - del ret -del os ,user_home ,pyexecutable ,subprocess \ No newline at end of file diff --git a/spaces/Cran-May/SEA-orca/README.md b/spaces/Cran-May/SEA-orca/README.md deleted file mode 100644 index 5e9a064b9c395116f8dfea377f9ac76ea847ef2c..0000000000000000000000000000000000000000 --- a/spaces/Cran-May/SEA-orca/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Shi-CI Extensional Analyzer -emoji: ⚡ -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.45.2 -app_file: app.py -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/model.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/model.py deleted file mode 100644 index fed3c31ac145b78907c7f771d1d8db6fb32d92ed..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/cldm/model.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -import torch - -from omegaconf import OmegaConf -from ldm.util import instantiate_from_config - - -def get_state_dict(d): - return d.get('state_dict', d) - - -def load_state_dict(ckpt_path, location='cpu'): - _, extension = os.path.splitext(ckpt_path) - if extension.lower() == ".safetensors": - import safetensors.torch - state_dict = safetensors.torch.load_file(ckpt_path, device=location) - else: - state_dict = get_state_dict(torch.load(ckpt_path, map_location=torch.device(location))) - state_dict = get_state_dict(state_dict) - print(f'Loaded state_dict from [{ckpt_path}]') - return state_dict - - -def create_model(config_path): - config = OmegaConf.load(config_path) - model = instantiate_from_config(config.model).cpu() - print(f'Loaded model config from [{config_path}]') - return model diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otTraverse.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otTraverse.py deleted file mode 100644 index bf22dcfdb500cd50525fce749562384a82b1cb0f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otTraverse.py +++ /dev/null @@ -1,161 +0,0 @@ -"""Methods for traversing trees of otData-driven OpenType tables.""" -from collections import deque -from typing import Callable, Deque, Iterable, List, Optional, Tuple -from .otBase import BaseTable - - -__all__ = 
[ - "bfs_base_table", - "dfs_base_table", - "SubTablePath", -] - - -class SubTablePath(Tuple[BaseTable.SubTableEntry, ...]): - def __str__(self) -> str: - path_parts = [] - for entry in self: - path_part = entry.name - if entry.index is not None: - path_part += f"[{entry.index}]" - path_parts.append(path_part) - return ".".join(path_parts) - - -# Given f(current frontier, new entries) add new entries to frontier -AddToFrontierFn = Callable[[Deque[SubTablePath], List[SubTablePath]], None] - - -def dfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Depth-first search tree of BaseTables. - - Args: - root (BaseTable): the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. - """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extendleft(reversed(new)), - iter_subtables_fn, - ) - - -def bfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Breadth-first search tree of BaseTables. - - Args: - the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. 
- """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extend(new), - iter_subtables_fn, - ) - - -def _traverse_ot_data( - root: BaseTable, - root_accessor: Optional[str], - skip_root: bool, - predicate: Optional[Callable[[SubTablePath], bool]], - add_to_frontier_fn: AddToFrontierFn, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - # no visited because general otData cannot cycle (forward-offset only) - if root_accessor is None: - root_accessor = type(root).__name__ - - if predicate is None: - - def predicate(path): - return True - - if iter_subtables_fn is None: - - def iter_subtables_fn(table): - return table.iterSubTables() - - frontier: Deque[SubTablePath] = deque() - - root_entry = BaseTable.SubTableEntry(root_accessor, root) - if not skip_root: - frontier.append((root_entry,)) - else: - add_to_frontier_fn( - frontier, - [ - (root_entry, subtable_entry) - for subtable_entry in iter_subtables_fn(root) - ], - ) - - while frontier: - # path is (value, attr_name) tuples. attr_name is attr of parent to get value - path = frontier.popleft() - current = path[-1].value - - if not predicate(path): - continue - - yield SubTablePath(path) - - new_entries = [ - path + (subtable_entry,) for subtable_entry in iter_subtables_fn(current) - ] - - add_to_frontier_fn(frontier, new_entries) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Button-9b719f62.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Button-9b719f62.css deleted file mode 100644 index 1febd1de643feeadb668f5d0fc297f661ce47482..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Button-9b719f62.css +++ /dev/null @@ -1 +0,0 @@ -.block.svelte-90oupt{position:relative;margin:0;box-shadow:var(--block-shadow);border-width:var(--block-border-width);border-color:var(--block-border-color);border-radius:var(--block-radius);background:var(--block-background-fill);width:100%;line-height:var(--line-sm)}.block.border_focus.svelte-90oupt{border-color:var(--color-accent)}.padded.svelte-90oupt{padding:var(--block-padding)}.hidden.svelte-90oupt{display:none}.hide-container.svelte-90oupt{margin:0;box-shadow:none;--block-border-width:0;background:transparent;padding:0;overflow:visible}div.svelte-e8n7p6{margin-bottom:var(--spacing-lg);color:var(--block-info-text-color);font-weight:var(--block-info-text-weight);font-size:var(--block-info-text-size);line-height:var(--line-sm)}span.has-info.svelte-1gfkn6j{margin-bottom:var(--spacing-xs)}span.svelte-1gfkn6j:not(.has-info){margin-bottom:var(--spacing-lg)}span.svelte-1gfkn6j{display:inline-block;position:relative;z-index:var(--layer-4);border:solid var(--block-title-border-width) var(--block-title-border-color);border-radius:var(--block-title-radius);background:var(--block-title-background-fill);padding:var(--block-title-padding);color:var(--block-title-text-color);font-weight:var(--block-title-text-weight);font-size:var(--block-title-text-size);line-height:var(--line-sm)}.hide.svelte-1gfkn6j{margin:0;height:0}div.svelte-1mwvhlq{display:inline-flex;align-items:center;z-index:var(--layer-2);box-shadow:var(--block-label-shadow);border:var(--block-label-border-width) solid 
var(--border-color-primary);border-top:none;border-left:none;border-radius:var(--block-label-radius);background:var(--block-label-background-fill);padding:var(--block-label-padding);pointer-events:none;color:var(--block-label-text-color);font-weight:var(--block-label-text-weight);font-size:var(--block-label-text-size);line-height:var(--line-sm)}.gr-group div.svelte-1mwvhlq{border-top-left-radius:0}div.float.svelte-1mwvhlq{position:absolute;top:var(--block-label-margin);left:var(--block-label-margin)}div.svelte-1mwvhlq:not(.float){position:static;margin-top:var(--block-label-margin);margin-left:var(--block-label-margin)}.hide.svelte-1mwvhlq{height:0}span.svelte-1mwvhlq{opacity:.8;margin-right:var(--size-2);width:calc(var(--block-label-text-size) - 1px);height:calc(var(--block-label-text-size) - 1px)}.hide-label.svelte-1mwvhlq{box-shadow:none;border-width:0;background:transparent;overflow:visible}button.svelte-1030q2h{display:flex;justify-content:center;align-items:center;gap:1px;z-index:var(--layer-1);box-shadow:var(--shadow-drop);border:1px solid var(--button-secondary-border-color);border-radius:var(--radius-sm);background:var(--background-fill-primary);padding:2px;color:var(--block-label-text-color)}button.svelte-1030q2h:hover{cursor:pointer;border:2px solid var(--button-secondary-border-color-hover);padding:1px;color:var(--block-label-text-color)}span.svelte-1030q2h{padding:0 1px;font-size:10px}div.svelte-1030q2h{padding:2px;width:14px;height:14px}.pending.svelte-1030q2h{animation:svelte-1030q2h-flash .5s infinite}@keyframes svelte-1030q2h-flash{0%{opacity:.5}50%{opacity:1}to{opacity:.5}}.empty.svelte-lk9eg8{display:flex;justify-content:center;align-items:center;margin-top:calc(0px - var(--size-6));height:var(--size-full)}.icon.svelte-lk9eg8{opacity:.5;height:var(--size-5);color:var(--body-text-color)}.small.svelte-lk9eg8{min-height:calc(var(--size-32) - 20px)}.large.svelte-lk9eg8{min-height:calc(var(--size-64) - 20px)}.unpadded_box.svelte-lk9eg8{margin-top:0}.small_parent.svelte-lk9eg8{min-height:100%!important}.dropdown-arrow.svelte-p5edak{fill:var(--body-text-color);margin-right:var(--size-2);width:var(--size-5)}button.svelte-1e89no8{display:inline-flex;justify-content:center;align-items:center;transition:var(--button-transition);box-shadow:var(--button-shadow);padding:var(--size-0-5) var(--size-2);text-align:center}button.svelte-1e89no8:hover,button[disabled].svelte-1e89no8{box-shadow:var(--button-shadow-hover)}button.svelte-1e89no8:active{box-shadow:var(--button-shadow-active)}button[disabled].svelte-1e89no8{opacity:.5;filter:grayscale(30%);cursor:not-allowed}.hidden.svelte-1e89no8{display:none}.primary.svelte-1e89no8{border:var(--button-border-width) solid var(--button-primary-border-color);background:var(--button-primary-background-fill);color:var(--button-primary-text-color)}.primary.svelte-1e89no8:hover,.primary[disabled].svelte-1e89no8{border-color:var(--button-primary-border-color-hover);background:var(--button-primary-background-fill-hover);color:var(--button-primary-text-color-hover)}.secondary.svelte-1e89no8{border:var(--button-border-width) solid var(--button-secondary-border-color);background:var(--button-secondary-background-fill);color:var(--button-secondary-text-color)}.secondary.svelte-1e89no8:hover,.secondary[disabled].svelte-1e89no8{border-color:var(--button-secondary-border-color-hover);background:var(--button-secondary-background-fill-hover);color:var(--button-secondary-text-color-hover)}.stop.svelte-1e89no8{border:var(--button-border-width) solid 
var(--button-cancel-border-color);background:var(--button-cancel-background-fill);color:var(--button-cancel-text-color)}.stop.svelte-1e89no8:hover,.stop[disabled].svelte-1e89no8{border-color:var(--button-cancel-border-color-hover);background:var(--button-cancel-background-fill-hover);color:var(--button-cancel-text-color-hover)}.sm.svelte-1e89no8{border-radius:var(--button-small-radius);padding:var(--button-small-padding);font-weight:var(--button-small-text-weight);font-size:var(--button-small-text-size)}.lg.svelte-1e89no8{border-radius:var(--button-large-radius);padding:var(--button-large-padding);font-weight:var(--button-large-text-weight);font-size:var(--button-large-text-size)} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ae57ca19.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ae57ca19.js deleted file mode 100644 index 30c112e12695d0ee969a974e89b676b5aa8218ab..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-ae57ca19.js +++ /dev/null @@ -1,2 +0,0 @@ -import{P as j,N as G,c as E,D as U,e as w,T as b,I as H}from"./index-f90e1963.js";class P{constructor(t,e,s,i,h,r,n,a,l,f=0,u){this.p=t,this.stack=e,this.state=s,this.reducePos=i,this.pos=h,this.score=r,this.buffer=n,this.bufferBase=a,this.curContext=l,this.lookAhead=f,this.parent=u}toString(){return`[${this.stack.filter((t,e)=>e%3==0).concat(this.state)}]@${this.pos}${this.score?"!"+this.score:""}`}static start(t,e,s=0){let i=t.parser.context;return new P(t,[],e,s,s,0,[],0,i?new y(i,i.start):null,0,null)}get context(){return this.curContext?this.curContext.context:null}pushState(t,e){this.stack.push(this.state,e,this.bufferBase+this.buffer.length),this.state=t}reduce(t){var e;let s=t>>19,i=t&65535,{parser:h}=this.p,r=h.dynamicPrecedence(i);if(r&&(this.score+=r),s==0){this.pushState(h.getGoto(this.state,i,!0),this.reducePos),i=2e3&&!(!((e=this.p.parser.nodeSet.types[i])===null||e===void 0)&&e.isAnonymous)&&(a==this.p.lastBigReductionStart?(this.p.bigReductionCount++,this.p.lastBigReductionSize=l):this.p.lastBigReductionSizen;)this.stack.pop();this.reduceContext(i,a)}storeNode(t,e,s,i=4,h=!1){if(t==0&&(!this.stack.length||this.stack[this.stack.length-1]0&&r.buffer[n-4]==0&&r.buffer[n-1]>-1){if(e==s)return;if(r.buffer[n-2]>=e){r.buffer[n-2]=s;return}}}if(!h||this.pos==s)this.buffer.push(t,e,s,i);else{let r=this.buffer.length;if(r>0&&this.buffer[r-4]!=0)for(;r>0&&this.buffer[r-2]>s;)this.buffer[r]=this.buffer[r-4],this.buffer[r+1]=this.buffer[r-3],this.buffer[r+2]=this.buffer[r-2],this.buffer[r+3]=this.buffer[r-1],r-=4,i>4&&(i-=4);this.buffer[r]=t,this.buffer[r+1]=e,this.buffer[r+2]=s,this.buffer[r+3]=i}}shift(t,e,s){let i=this.pos;if(t&131072)this.pushState(t&65535,this.pos);else if(t&262144)this.pos=s,this.shiftContext(e,i),e<=this.p.parser.maxNode&&this.buffer.push(e,i,s,4);else{let h=t,{parser:r}=this.p;(s>this.pos||e<=r.maxNode)&&(this.pos=s,r.stateFlag(h,1)||(this.reducePos=s)),this.pushState(h,i),this.shiftContext(e,i),e<=r.maxNode&&this.buffer.push(e,i,s,4)}}apply(t,e,s){t&65536?this.reduce(t):this.shift(t,e,s)}useNode(t,e){let s=this.p.reused.length-1;(s<0||this.p.reused[s]!=t)&&(this.p.reused.push(t),s++);let 
i=this.pos;this.reducePos=this.pos=i+t.length,this.pushState(e,i),this.buffer.push(s,i,this.reducePos,-1),this.curContext&&this.updateContext(this.curContext.tracker.reuse(this.curContext.context,t,this,this.p.stream.reset(this.pos-t.length)))}split(){let t=this,e=t.buffer.length;for(;e>0&&t.buffer[e-2]>t.reducePos;)e-=4;let s=t.buffer.slice(e),i=t.bufferBase+e;for(;t&&i==t.bufferBase;)t=t.parent;return new P(this.p,this.stack.slice(),this.state,this.reducePos,this.pos,this.score,s,i,this.curContext,this.lookAhead,t)}recoverByDelete(t,e){let s=t<=this.p.parser.maxNode;s&&this.storeNode(t,this.pos,e,4),this.storeNode(0,this.pos,e,s?8:4),this.pos=this.reducePos=e,this.score-=190}canShift(t){for(let e=new W(this);;){let s=this.p.parser.stateSlot(e.state,4)||this.p.parser.hasAction(e.state,t);if(s==0)return!1;if(!(s&65536))return!0;e.reduce(s)}}recoverByInsert(t){if(this.stack.length>=300)return[];let e=this.p.parser.nextStates(this.state);if(e.length>4<<1||this.stack.length>=120){let i=[];for(let h=0,r;ha&1&&n==r)||i.push(e[h],r)}e=i}let s=[];for(let i=0;i>19,i=t&65535,h=this.stack.length-s*3;if(h<0||e.getGoto(this.stack[h],i,!1)<0)return!1;this.storeNode(0,this.reducePos,this.reducePos,4,!0),this.score-=100}return this.reducePos=this.pos,this.reduce(t),!0}forceAll(){for(;!this.p.parser.stateFlag(this.state,2);)if(!this.forceReduce()){this.storeNode(0,this.pos,this.pos,4,!0);break}return this}get deadEnd(){if(this.stack.length!=3)return!1;let{parser:t}=this.p;return t.data[t.stateSlot(this.state,1)]==65535&&!t.stateSlot(this.state,4)}restart(){this.state=this.stack[0],this.stack.length=0}sameState(t){if(this.state!=t.state||this.stack.length!=t.stack.length)return!1;for(let e=0;ethis.lookAhead&&(this.emitLookAhead(),this.lookAhead=t)}close(){this.curContext&&this.curContext.tracker.strict&&this.emitContext(),this.lookAhead>0&&this.emitLookAhead()}}class y{constructor(t,e){this.tracker=t,this.context=e,this.hash=t.strict?t.hash(e):0}}var N;(function(o){o[o.Insert=200]="Insert",o[o.Delete=190]="Delete",o[o.Reduce=100]="Reduce",o[o.MaxNext=4]="MaxNext",o[o.MaxInsertStackDepth=300]="MaxInsertStackDepth",o[o.DampenInsertStackDepth=120]="DampenInsertStackDepth",o[o.MinBigReduction=2e3]="MinBigReduction"})(N||(N={}));class W{constructor(t){this.start=t,this.state=t.state,this.stack=t.stack,this.base=this.stack.length}reduce(t){let e=t&65535,s=t>>19;s==0?(this.stack==this.start.stack&&(this.stack=this.stack.slice()),this.stack.push(this.state,0,0),this.base+=3):this.base-=(s-1)*3;let i=this.start.p.parser.getGoto(this.stack[this.base-3],e,!0);this.state=i}}class C{constructor(t,e,s){this.stack=t,this.pos=e,this.index=s,this.buffer=t.buffer,this.index==0&&this.maybeNext()}static create(t,e=t.bufferBase+t.buffer.length){return new C(t,e,e-t.bufferBase)}maybeNext(){let t=this.stack.parent;t!=null&&(this.index=this.stack.bufferBase-t.bufferBase,this.stack=t,this.buffer=t.buffer)}get id(){return this.buffer[this.index-4]}get start(){return this.buffer[this.index-3]}get end(){return this.buffer[this.index-2]}get size(){return this.buffer[this.index-1]}next(){this.index-=4,this.pos-=4,this.index==0&&this.maybeNext()}fork(){return new C(this.stack,this.pos,this.index)}}function x(o,t=Uint16Array){if(typeof o!="string")return o;let e=null;for(let s=0,i=0;s=92&&r--,r>=34&&r--;let a=r-32;if(a>=46&&(a-=46,n=!0),h+=a,n)break;h*=46}e?e[i++]=h:e=new t(h)}return e}class S{constructor(){this.start=-1,this.value=-1,this.end=-1,this.extended=-1,this.lookAhead=0,this.mask=0,this.context=0}}const D=new S;class 
q{constructor(t,e){this.input=t,this.ranges=e,this.chunk="",this.chunkOff=0,this.chunk2="",this.chunk2Pos=0,this.next=-1,this.token=D,this.rangeIndex=0,this.pos=this.chunkPos=e[0].from,this.range=e[0],this.end=e[e.length-1].to,this.readNext()}resolveOffset(t,e){let s=this.range,i=this.rangeIndex,h=this.pos+t;for(;hs.to:h>=s.to;){if(i==this.ranges.length-1)return null;let r=this.ranges[++i];h+=r.from-s.to,s=r}return h}clipPos(t){if(t>=this.range.from&&tt)return Math.max(t,e.from);return this.end}peek(t){let e=this.chunkOff+t,s,i;if(e>=0&&e=this.chunk2Pos&&sn.to&&(this.chunk2=this.chunk2.slice(0,n.to-s)),i=this.chunk2.charCodeAt(0)}}return s>=this.token.lookAhead&&(this.token.lookAhead=s+1),i}acceptToken(t,e=0){let s=e?this.resolveOffset(e,-1):this.pos;if(s==null||s=this.chunk2Pos&&this.posthis.range.to?t.slice(0,this.range.to-this.pos):t,this.chunkPos=this.pos,this.chunkOff=0}}readNext(){return this.chunkOff>=this.chunk.length&&(this.getChunk(),this.chunkOff==this.chunk.length)?this.next=-1:this.next=this.chunk.charCodeAt(this.chunkOff)}advance(t=1){for(this.chunkOff+=t;this.pos+t>=this.range.to;){if(this.rangeIndex==this.ranges.length-1)return this.setDone();t-=this.range.to-this.pos,this.range=this.ranges[++this.rangeIndex],this.pos=this.range.from}return this.pos+=t,this.pos>=this.token.lookAhead&&(this.token.lookAhead=this.pos+1),this.readNext()}setDone(){return this.pos=this.chunkPos=this.end,this.range=this.ranges[this.rangeIndex=this.ranges.length-1],this.chunk="",this.next=-1}reset(t,e){if(e?(this.token=e,e.start=t,e.lookAhead=t+1,e.value=e.extended=-1):this.token=D,this.pos!=t){if(this.pos=t,t==this.end)return this.setDone(),this;for(;t=this.range.to;)this.range=this.ranges[++this.rangeIndex];t>=this.chunkPos&&t=this.chunkPos&&e<=this.chunkPos+this.chunk.length)return this.chunk.slice(t-this.chunkPos,e-this.chunkPos);if(t>=this.chunk2Pos&&e<=this.chunk2Pos+this.chunk2.length)return this.chunk2.slice(t-this.chunk2Pos,e-this.chunk2Pos);if(t>=this.range.from&&e<=this.range.to)return this.input.read(t,e);let s="";for(let i of this.ranges){if(i.from>=e)break;i.to>t&&(s+=this.input.read(Math.max(i.from,t),Math.min(i.to,e)))}return s}}class m{constructor(t,e){this.data=t,this.id=e}token(t,e){let{parser:s}=e.p;F(this.data,t,e,this.id,s.data,s.tokenPrecTable)}}m.prototype.contextual=m.prototype.fallback=m.prototype.extend=!1;class J{constructor(t,e,s){this.precTable=e,this.elseToken=s,this.data=typeof t=="string"?x(t):t}token(t,e){let s=t.pos,i;for(;i=t.pos,F(this.data,t,e,0,this.data,this.precTable),!(t.token.value>-1);){if(this.elseToken==null)return;if(t.next<0)break;t.advance(),t.reset(i+1,t.token)}i>s&&(t.reset(s,t.token),t.acceptToken(this.elseToken,i-s))}}J.prototype.contextual=m.prototype.fallback=m.prototype.extend=!1;class tt{constructor(t,e={}){this.token=t,this.contextual=!!e.contextual,this.fallback=!!e.fallback,this.extend=!!e.extend}}function F(o,t,e,s,i,h){let r=0,n=1<0){let d=o[p];if(a.allows(d)&&(t.token.value==-1||t.token.value==d||K(d,t.token.value,i,h))){t.acceptToken(d);break}}let f=t.next,u=0,c=o[r+2];if(t.next<0&&c>u&&o[l+c*3-3]==65535&&o[l+c*3-3]==65535){r=o[l+c*3-1];continue t}for(;u>1,d=l+p+(p<<1),L=o[d],$=o[d+1]||65536;if(f=$)u=p+1;else{r=o[d+2],t.advance();continue t}}break}}function I(o,t,e){for(let s=t,i;(i=o[s])!=65535;s++)if(i==e)return s-t;return-1}function K(o,t,e,s){let i=I(e,s,t);return i<0||I(e,s,o)t)&&!s.type.isError)return 
e<0?Math.max(0,Math.min(s.to-1,t-25)):Math.min(o.length,Math.max(s.from+1,t+25));if(e<0?s.prevSibling():s.nextSibling())break;if(!s.parent())return e<0?0:o.length}}class Q{constructor(t,e){this.fragments=t,this.nodeSet=e,this.i=0,this.fragment=null,this.safeFrom=-1,this.safeTo=-1,this.trees=[],this.start=[],this.index=[],this.nextFragment()}nextFragment(){let t=this.fragment=this.i==this.fragments.length?null:this.fragments[this.i++];if(t){for(this.safeFrom=t.openStart?B(t.tree,t.from+t.offset,1)-t.offset:t.from,this.safeTo=t.openEnd?B(t.tree,t.to+t.offset,-1)-t.offset:t.to;this.trees.length;)this.trees.pop(),this.start.pop(),this.index.pop();this.trees.push(t.tree),this.start.push(-t.offset),this.index.push(0),this.nextStart=this.safeFrom}else this.nextStart=1e9}nodeAt(t){if(tt)return this.nextStart=r,null;if(h instanceof b){if(r==t){if(r=Math.max(this.safeFrom,t)&&(this.trees.push(h),this.start.push(r),this.index.push(0))}else this.index[e]++,this.nextStart=r+h.length}}}class V{constructor(t,e){this.stream=e,this.tokens=[],this.mainToken=null,this.actions=[],this.tokens=t.tokenizers.map(s=>new S)}getActions(t){let e=0,s=null,{parser:i}=t.p,{tokenizers:h}=i,r=i.stateSlot(t.state,3),n=t.curContext?t.curContext.hash:0,a=0;for(let l=0;lu.end+25&&(a=Math.max(u.lookAhead,a)),u.value!=0)){let c=e;if(u.extended>-1&&(e=this.addActions(t,u.extended,u.end,e)),e=this.addActions(t,u.value,u.end,e),!f.extend&&(s=u,e>c))break}}for(;this.actions.length>e;)this.actions.pop();return a&&t.setLookAhead(a),!s&&t.pos==this.stream.end&&(s=new S,s.value=t.p.parser.eofTerm,s.start=s.end=t.pos,e=this.addActions(t,s.value,s.end,e)),this.mainToken=s,this.actions}getMainToken(t){if(this.mainToken)return this.mainToken;let e=new S,{pos:s,p:i}=t;return e.start=s,e.end=Math.min(s+1,i.stream.end),e.value=s==i.stream.end?i.parser.eofTerm:0,e}updateCachedToken(t,e,s){let i=this.stream.clipPos(s.pos);if(e.token(this.stream.reset(i,t),s),t.value>-1){let{parser:h}=s.p;for(let r=0;r=0&&s.p.parser.dialect.allows(n>>1)){n&1?t.extended=n>>1:t.value=n>>1;break}}}else t.value=0,t.end=this.stream.clipPos(i+1)}putAction(t,e,s,i){for(let h=0;ht.bufferLength*4?new Q(s,t.nodeSet):null}get parsedPos(){return this.minStackPos}advance(){let t=this.stacks,e=this.minStackPos,s=this.stacks=[],i,h;if(this.bigReductionCount>300&&t.length==1){let[r]=t;for(;r.forceReduce()&&r.stack.length&&r.stack[r.stack.length-2]>=this.lastBigReductionStart;);this.bigReductionCount=this.lastBigReductionSize=0}for(let r=0;re)s.push(n);else{if(this.advanceStack(n,s,t))continue;{i||(i=[],h=[]),i.push(n);let a=this.tokens.getMainToken(n);h.push(a.value,a.end)}}break}}if(!s.length){let r=i&&Z(i);if(r)return this.stackToTree(r);if(this.parser.strict)throw g&&i&&console.log("Stuck with token "+(this.tokens.mainToken?this.parser.getName(this.tokens.mainToken.value):"none")),new SyntaxError("No parse at "+e);this.recovering||(this.recovering=5)}if(this.recovering&&i){let r=this.stoppedAt!=null&&i[0].pos>this.stoppedAt?i[0]:this.runRecovery(i,h,s);if(r)return this.stackToTree(r.forceAll())}if(this.recovering){let r=this.recovering==1?1:this.recovering*3;if(s.length>r)for(s.sort((n,a)=>a.score-n.score);s.length>r;)s.pop();s.some(n=>n.reducePos>e)&&this.recovering--}else if(s.length>1){t:for(let r=0;r500&&l.buffer.length>500)if((n.score-l.score||n.buffer.length-l.buffer.length)>0)s.splice(a--,1);else{s.splice(r--,1);continue t}}}s.length>12&&s.splice(12,s.length-12)}this.minStackPos=s[0].pos;for(let r=1;r ":"";if(this.stoppedAt!=null&&i>this.stoppedAt)return 
t.forceReduce()?t:null;if(this.fragments){let l=t.curContext&&t.curContext.tracker.strict,f=l?t.curContext.hash:0;for(let u=this.fragments.nodeAt(i);u;){let c=this.parser.nodeSet.types[u.type.id]==u.type?h.getGoto(t.state,u.type.id):-1;if(c>-1&&u.length&&(!l||(u.prop(w.contextHash)||0)==f))return t.useNode(u,c),g&&console.log(r+this.stackID(t)+` (via reuse of ${h.getName(u.type.id)})`),!0;if(!(u instanceof b)||u.children.length==0||u.positions[0]>0)break;let p=u.children[0];if(p instanceof b&&u.positions[0]==0)u=p;else break}}let n=h.stateSlot(t.state,4);if(n>0)return t.reduce(n),g&&console.log(r+this.stackID(t)+` (via always-reduce ${h.getName(n&65535)})`),!0;if(t.stack.length>=15e3)for(;t.stack.length>9e3&&t.forceReduce(););let a=this.tokens.getActions(t);for(let l=0;li?e.push(d):s.push(d)}return!1}advanceFully(t,e){let s=t.pos;for(;;){if(!this.advanceStack(t,null,null))return!1;if(t.pos>s)return R(t,e),!0}}runRecovery(t,e,s){let i=null,h=!1;for(let r=0;r ":"";if(n.deadEnd&&(h||(h=!0,n.restart(),g&&console.log(f+this.stackID(n)+" (restarted)"),this.advanceFully(n,s))))continue;let u=n.split(),c=f;for(let p=0;u.forceReduce()&&p<10&&(g&&console.log(c+this.stackID(u)+" (via force-reduce)"),!this.advanceFully(u,s));p++)g&&(c=this.stackID(u)+" -> ");for(let p of n.recoverByInsert(a))g&&console.log(f+this.stackID(p)+" (via recover-insert)"),this.advanceFully(p,s);this.stream.end>n.pos?(l==n.pos&&(l++,a=0),n.recoverByDelete(a,l),g&&console.log(f+this.stackID(n)+` (via recover-delete ${this.parser.getName(a)})`),R(n,s)):(!i||i.scoreo;class et{constructor(t){this.start=t.start,this.shift=t.shift||T,this.reduce=t.reduce||T,this.reuse=t.reuse||T,this.hash=t.hash||(()=>0),this.strict=t.strict!==!1}}class v extends j{constructor(t){if(super(),this.wrappers=[],t.version!=14)throw new RangeError(`Parser version (${t.version}) doesn't match runtime version (14)`);let e=t.nodeNames.split(" ");this.minRepeatTerm=e.length;for(let n=0;nt.topRules[n][1]),i=[];for(let n=0;n=0)h(f,a,n[l++]);else{let u=n[l+-f];for(let c=-f;c>0;c--)h(n[l++],a,u);l++}}}this.nodeSet=new G(e.map((n,a)=>E.define({name:a>=this.minRepeatTerm?void 0:n,id:a,props:i[a],top:s.indexOf(a)>-1,error:a==0,skipped:t.skippedNodes&&t.skippedNodes.indexOf(a)>-1}))),t.propSources&&(this.nodeSet=this.nodeSet.extend(...t.propSources)),this.strict=!1,this.bufferLength=U;let r=x(t.tokenData);this.context=t.context,this.specializerSpecs=t.specialized||[],this.specialized=new Uint16Array(this.specializerSpecs.length);for(let n=0;ntypeof n=="number"?new m(r,n):n),this.topRules=t.topRules,this.dialects=t.dialects||{},this.dynamicPrecedences=t.dynamicPrecedences||null,this.tokenPrecTable=t.tokenPrec,this.termNames=t.termNames||null,this.maxNode=this.nodeSet.types.length-1,this.dialect=this.parseDialect(),this.top=this.topRules[Object.keys(this.topRules)[0]]}createParse(t,e,s){let i=new X(this,t,e,s);for(let h of this.wrappers)i=h(i,t,e,s);return i}getGoto(t,e,s=!1){let i=this.goto;if(e>=i[0])return-1;for(let h=i[e+1];;){let r=i[h++],n=r&1,a=i[h++];if(n&&s)return a;for(let l=h+(r>>1);h0}validAction(t,e){if(e==this.stateSlot(t,4))return!0;for(let s=this.stateSlot(t,1);;s+=3){if(this.data[s]==65535)if(this.data[s+1]==1)s=k(this.data,s+2);else return!1;if(e==k(this.data,s+1))return!0}}nextStates(t){let e=[];for(let s=this.stateSlot(t,1);;s+=3){if(this.data[s]==65535)if(this.data[s+1]==1)s=k(this.data,s+2);else break;if(!(this.data[s+2]&1)){let i=this.data[s+1];e.some((h,r)=>r&1&&h==i)||e.push(this.data[s],i)}}return e}configure(t){let 
e=Object.assign(Object.create(v.prototype),this);if(t.props&&(e.nodeSet=this.nodeSet.extend(...t.props)),t.top){let s=this.topRules[t.top];if(!s)throw new RangeError(`Invalid top rule name ${t.top}`);e.top=s}return t.tokenizers&&(e.tokenizers=this.tokenizers.map(s=>{let i=t.tokenizers.find(h=>h.from==s);return i?i.to:s})),t.specializers&&(e.specializers=this.specializers.slice(),e.specializerSpecs=this.specializerSpecs.map((s,i)=>{let h=t.specializers.find(n=>n.from==s.external);if(!h)return s;let r=Object.assign(Object.assign({},s),{external:h.to});return e.specializers[i]=O(r),r})),t.contextTracker&&(e.context=t.contextTracker),t.dialect&&(e.dialect=this.parseDialect(t.dialect)),t.strict!=null&&(e.strict=t.strict),t.wrap&&(e.wrappers=e.wrappers.concat(t.wrap)),t.bufferLength!=null&&(e.bufferLength=t.bufferLength),e}hasWrappers(){return this.wrappers.length>0}getName(t){return this.termNames?this.termNames[t]:String(t<=this.maxNode&&this.nodeSet.types[t].name||t)}get eofTerm(){return this.maxNode+1}get topNode(){return this.nodeSet.types[this.top[1]]}dynamicPrecedence(t){let e=this.dynamicPrecedences;return e==null?0:e[t]||0}parseDialect(t){let e=Object.keys(this.dialects),s=e.map(()=>!1);if(t)for(let h of t.split(" ")){let r=e.indexOf(h);r>=0&&(s[r]=!0)}let i=null;for(let h=0;hs)&&e.p.parser.stateFlag(e.state,2)&&(!t||t.scoreo.external(e,s)<<1|t}return o.get}export{et as C,tt as E,v as L,J as a}; -//# sourceMappingURL=index-ae57ca19.js.map diff --git a/spaces/DaleChen/AutoGPT/tests/test_image_gen.py b/spaces/DaleChen/AutoGPT/tests/test_image_gen.py deleted file mode 100644 index 19c57e427d5c1b84aa7f72925733d0056ddf5268..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/tests/test_image_gen.py +++ /dev/null @@ -1,102 +0,0 @@ -import hashlib -import os -import unittest - -from PIL import Image - -from autogpt.commands.image_gen import generate_image, generate_image_with_sd_webui -from autogpt.config import Config -from autogpt.workspace import path_in_workspace - - -def lst(txt): - return txt.split(":")[1].strip() - - -@unittest.skipIf(os.getenv("CI"), "Skipping image generation tests") -class TestImageGen(unittest.TestCase): - def setUp(self): - self.config = Config() - - def test_dalle(self): - self.config.image_provider = "dalle" - - # Test using size 256 - result = lst(generate_image("astronaut riding a horse", 256)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (256, 256)) - image_path.unlink() - - # Test using size 512 - result = lst(generate_image("astronaut riding a horse", 512)) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (512, 512)) - image_path.unlink() - - def test_huggingface(self): - self.config.image_provider = "huggingface" - - # Test usin SD 1.4 model and size 512 - self.config.huggingface_image_model = "CompVis/stable-diffusion-v1-4" - result = lst(generate_image("astronaut riding a horse", 512)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (512, 512)) - image_path.unlink() - - # Test using SD 2.1 768 model and size 768 - self.config.huggingface_image_model = "stabilityai/stable-diffusion-2-1" - result = lst(generate_image("astronaut riding a horse", 768)) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (768, 768)) - 
image_path.unlink() - - def test_sd_webui(self): - self.config.image_provider = "sd_webui" - return - - # Test using size 128 - result = lst(generate_image_with_sd_webui("astronaut riding a horse", 128)) - image_path = path_in_workspace(result) - self.assertTrue(image_path.exists()) - with Image.open(image_path) as img: - self.assertEqual(img.size, (128, 128)) - image_path.unlink() - - # Test using size 64 and negative prompt - result = lst( - generate_image_with_sd_webui( - "astronaut riding a horse", - negative_prompt="horse", - size=64, - extra={"seed": 123}, - ) - ) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (64, 64)) - neg_image_hash = hashlib.md5(img.tobytes()).hexdigest() - image_path.unlink() - - # Same test as above but without the negative prompt - result = lst( - generate_image_with_sd_webui( - "astronaut riding a horse", size=64, extra={"seed": 123} - ) - ) - image_path = path_in_workspace(result) - with Image.open(image_path) as img: - self.assertEqual(img.size, (64, 64)) - image_hash = hashlib.md5(img.tobytes()).hexdigest() - image_path.unlink() - - self.assertNotEqual(image_hash, neg_image_hash) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Danielzero/GPT3.5/assets/custom.js b/spaces/Danielzero/GPT3.5/assets/custom.js deleted file mode 100644 index b8071034f3618c541e3f4169c7fc6d6593d56f44..0000000000000000000000000000000000000000 --- a/spaces/Danielzero/GPT3.5/assets/custom.js +++ /dev/null @@ -1,224 +0,0 @@ - -// custom javascript here - -const MAX_HISTORY_LENGTH = 32; - -var key_down_history = []; -var currentIndex = -1; -var user_input_ta; - -var gradioContainer = null; -var user_input_ta = null; -var user_input_tb = null; -var userInfoDiv = null; -var appTitleDiv = null; -var chatbot = null; -var apSwitch = null; - -var ga = document.getElementsByTagName("gradio-app"); -var targetNode = ga[0]; -var isInIframe = (window.self !== window.top); - -// has the gradio page finished loading? can we touch its elements yet? -function gradioLoaded(mutations) { - for (var i = 0; i < mutations.length; i++) { - if (mutations[i].addedNodes.length) { - gradioContainer = document.querySelector(".gradio-container"); - user_input_tb = document.getElementById('user_input_tb'); - userInfoDiv = document.getElementById("user_info"); - appTitleDiv = document.getElementById("app_title"); - chatbot = document.querySelector('#chuanhu_chatbot'); - apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - - if (gradioContainer && apSwitch) { // has gradioContainer loaded? - adjustDarkMode(); - } - if (user_input_tb) { // has user_input_tb loaded? - selectHistory(); - } - if (userInfoDiv && appTitleDiv) { // have userInfoDiv and appTitleDiv loaded? - setTimeout(showOrHideUserInfo, 2000); - } - if (chatbot) { // has chatbot loaded? 
- setChatbotHeight() - } - } - } -} - -function selectHistory() { - user_input_ta = user_input_tb.querySelector("textarea"); - if (user_input_ta) { - observer.disconnect(); // stop observing - // listen for keydown events on the textarea - user_input_ta.addEventListener("keydown", function (event) { - var value = user_input_ta.value.trim(); - // check whether an arrow key was pressed - if (event.code === 'ArrowUp' || event.code === 'ArrowDown') { - // if an arrow key was pressed while the input box holds content that is not in the history, do nothing - if (value && key_down_history.indexOf(value) === -1) - return; - // for actions we handle, prevent the default behavior. - event.preventDefault(); - var length = key_down_history.length; - if (length === 0) { - currentIndex = -1; // if the history is empty, just reset the current selection - return; - } - if (currentIndex === -1) { - currentIndex = length; - } - if (event.code === 'ArrowUp' && currentIndex > 0) { - currentIndex--; - user_input_ta.value = key_down_history[currentIndex]; - } else if (event.code === 'ArrowDown' && currentIndex < length - 1) { - currentIndex++; - user_input_ta.value = key_down_history[currentIndex]; - } - user_input_ta.selectionStart = user_input_ta.value.length; - user_input_ta.selectionEnd = user_input_ta.value.length; - const input_event = new InputEvent("input", { bubbles: true, cancelable: true }); - user_input_ta.dispatchEvent(input_event); - } else if (event.code === "Enter") { - if (value) { - currentIndex = -1; - if (key_down_history.indexOf(value) === -1) { - key_down_history.push(value); - if (key_down_history.length > MAX_HISTORY_LENGTH) { - key_down_history.shift(); - } - } - } - } - }); - } -} - -function toggleUserInfoVisibility(shouldHide) { - if (userInfoDiv) { - if (shouldHide) { - userInfoDiv.classList.add("hideK"); - } else { - userInfoDiv.classList.remove("hideK"); - } - } -} -function showOrHideUserInfo() { - var sendBtn = document.getElementById("submit_btn"); - - // Bind mouse/touch events to show/hide user info - appTitleDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - userInfoDiv.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - sendBtn.addEventListener("mouseenter", function () { - toggleUserInfoVisibility(false); - }); - - appTitleDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - userInfoDiv.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - sendBtn.addEventListener("mouseleave", function () { - toggleUserInfoVisibility(true); - }); - - appTitleDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - userInfoDiv.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - sendBtn.ontouchstart = function () { - toggleUserInfoVisibility(false); - }; - - appTitleDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - userInfoDiv.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); - }; - sendBtn.ontouchend = function () { - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 3000); // delay 3 seconds before hiding user info - }; - - // Hide user info after 2 seconds - setTimeout(function () { - toggleUserInfoVisibility(true); - }, 2000); -} - -function toggleDarkMode(isEnabled) { - if (isEnabled) { - gradioContainer.classList.add("dark"); - document.body.style.setProperty("background-color", "var(--neutral-950)", "important"); - } else { - gradioContainer.classList.remove("dark"); - document.body.style.backgroundColor = ""; - } -} -function adjustDarkMode() { - const darkModeQuery = 
window.matchMedia("(prefers-color-scheme: dark)"); - - // 根据当前颜色模式设置初始状态 - apSwitch.checked = darkModeQuery.matches; - toggleDarkMode(darkModeQuery.matches); - // 监听颜色模式变化 - darkModeQuery.addEventListener("change", (e) => { - apSwitch.checked = e.matches; - toggleDarkMode(e.matches); - }); - // apSwitch = document.querySelector('.apSwitch input[type="checkbox"]'); - apSwitch.addEventListener("change", (e) => { - toggleDarkMode(e.target.checked); - }); -} - -function setChatbotHeight() { - const screenWidth = window.innerWidth; - const statusDisplay = document.querySelector('#status_display'); - const statusDisplayHeight = statusDisplay ? statusDisplay.offsetHeight : 0; - const wrap = chatbot.querySelector('.wrap'); - const vh = window.innerHeight * 0.01; - document.documentElement.style.setProperty('--vh', `${vh}px`); - if (isInIframe) { - chatbot.style.height = `700px`; - wrap.style.maxHeight = `calc(700px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))` - } else { - if (screenWidth <= 320) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 150}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else if (screenWidth <= 499) { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 100}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } else { - chatbot.style.height = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px)`; - wrap.style.maxHeight = `calc(var(--vh, 1vh) * 100 - ${statusDisplayHeight + 160}px - var(--line-sm) * 1rem - 2 * var(--block-label-margin))`; - } - } -} - -// 监视页面内部 DOM 变动 -var observer = new MutationObserver(function (mutations) { - gradioLoaded(mutations); -}); -observer.observe(targetNode, { childList: true, subtree: true }); - -// 监视页面变化 -window.addEventListener("DOMContentLoaded", function () { - isInIframe = (window.self !== window.top); -}); -window.addEventListener('resize', setChatbotHeight); -window.addEventListener('scroll', setChatbotHeight); -window.matchMedia("(prefers-color-scheme: dark)").addEventListener("change", adjustDarkMode); \ No newline at end of file diff --git a/spaces/Devaholic/fruit-demo/app.py b/spaces/Devaholic/fruit-demo/app.py deleted file mode 100644 index 97f28f20d80cec14fc1c4940b9b89f7102de756a..0000000000000000000000000000000000000000 --- a/spaces/Devaholic/fruit-demo/app.py +++ /dev/null @@ -1,43 +0,0 @@ -from tensorflow.keras.models import load_model -import numpy as np -import gradio as gr -from utils import remove_number - -model = load_model('main_model.h5') - -labels = ['Apple Braeburn', 'Apple Crimson Snow', 'Apple Golden 1', 'Apple Golden 2', 'Apple Golden 3', 'Apple Granny Smith', 'Apple Pink Lady', 'Apple Red 1', 'Apple Red 2', 'Apple Red 3', 'Apple Red Delicious', 'Apple Red Yellow 1', 'Apple Red Yellow 2', 'Apricot', 'Avocado', 'Avocado ripe', 'Banana', 'Banana Lady Finger', 'Banana Red', 'Beetroot', 'Blueberry', 'Cactus fruit', 'Cantaloupe 1', 'Cantaloupe 2', 'Carambula', 'Cauliflower', 'Cherry 1', 'Cherry 2', 'Cherry Rainier', 'Cherry Wax Black', 'Cherry Wax Red', 'Cherry Wax Yellow', 'Chestnut', 'Clementine', 'Cocos', 'Corn', 'Corn Husk', 'Cucumber Ripe', 'Cucumber Ripe 2', 'Dates', 'Eggplant', 'Fig', 'Ginger Root', 'Granadilla', 'Grape Blue', 'Grape Pink', 'Grape White', 'Grape White 2', 'Grape White 3', 'Grape White 4', 'Grapefruit Pink', 'Grapefruit 
White', 'Guava', 'Hazelnut', 'Huckleberry', 'Kaki', 'Kiwi', 'Kohlrabi', 'Kumquats', 'Lemon', 'Lemon Meyer', 'Limes', 'Lychee', 'Mandarine', 'Mango', 'Mango Red', 'Mangostan', 'Maracuja', 'Melon Piel de Sapo', 'Mulberry', 'Nectarine', 'Nectarine Flat', 'Nut Forest', 'Nut Pecan', 'Onion Red', 'Onion Red Peeled', 'Onion White', 'Orange', 'Papaya', 'Passion Fruit', 'Peach', 'Peach 2', 'Peach Flat', 'Pear', 'Pear 2', 'Pear Abate', 'Pear Forelle', 'Pear Kaiser', 'Pear Monster', 'Pear Red', 'Pear Stone', 'Pear Williams', 'Pepino', 'Pepper Green', 'Pepper Orange', 'Pepper Red', 'Pepper Yellow', 'Physalis', 'Physalis with Husk', 'Pineapple', 'Pineapple Mini', 'Pitahaya Red', 'Plum', 'Plum 2', 'Plum 3', 'Pomegranate', 'Pomelo Sweetie', 'Potato Red', 'Potato Red Washed', 'Potato Sweet', 'Potato White', 'Quince', 'Rambutan', 'Raspberry', 'Redcurrant', 'Salak', 'Strawberry', 'Strawberry Wedge', 'Tamarillo', 'Tangelo', 'Tomato 1', 'Tomato 2', 'Tomato 3', 'Tomato 4', 'Tomato Cherry Red', 'Tomato Heart', 'Tomato Maroon', 'Tomato not Ripened', 'Tomato Yellow', 'Walnut', 'Watermelon'] - -def get_prediction(image: np.ndarray) -> str: - """ - Get the prediction of the image - """ - image = image.reshape(1, 299, 299, 3) - image = image / 255.0 - - prediction = model.predict(image) - prediction = np.argmax(prediction) - - predicted_label = remove_number(labels[int(prediction)]) - return predicted_label - -def get_predicted_labels(image) -> dict: - """ - Get the labels - """ - image = image.reshape(1, 299, 299, 3) - image = image / 255.0 - - prediction = model.predict(image) - prediction = np.ravel(prediction) - - confidences = {label: float(prob) for label, prob in zip(labels, list(prediction))} - - return confidences - -if __name__ == '__main__': - app = gr.Interface( - fn=get_predicted_labels, - inputs=gr.Image(shape=(299, 299), image_mode='RGB', tool='select'), - outputs=gr.outputs.Label(num_top_classes=5) - ) - app.launch(share=True) \ No newline at end of file diff --git a/spaces/DhruvShek/chatlm/utils.py b/spaces/DhruvShek/chatlm/utils.py deleted file mode 100644 index 6fde4a947858dabce091ae59322cf01417eeb5f1..0000000000000000000000000000000000000000 --- a/spaces/DhruvShek/chatlm/utils.py +++ /dev/null @@ -1,91 +0,0 @@ -import torch -import torch.nn as nn -from torch.utils.data import Dataset -import torch.utils.data -import json - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -class Dataset(Dataset): - - def __init__(self): - - self.pairs = json.load(open('pairs_encoded.json')) - self.dataset_size = len(self.pairs) - - def __getitem__(self, i): - - question = torch.LongTensor(self.pairs[i][0]) - reply = torch.LongTensor(self.pairs[i][1]) - - return question, reply - - def __len__(self): - return self.dataset_size - - -def create_masks(question, reply_input, reply_target): - - def subsequent_mask(size): - mask = torch.triu(torch.ones(size, size)).transpose(0, 1).type(dtype=torch.uint8) - return mask.unsqueeze(0) - - question_mask = (question!=0).to(device) - question_mask = question_mask.unsqueeze(1).unsqueeze(1) # (batch_size, 1, 1, max_words) - - reply_input_mask = reply_input!=0 - reply_input_mask = reply_input_mask.unsqueeze(1) # (batch_size, 1, max_words) - reply_input_mask = reply_input_mask & subsequent_mask(reply_input.size(-1)).type_as(reply_input_mask.data) - reply_input_mask = reply_input_mask.unsqueeze(1) # (batch_size, 1, max_words, max_words) - reply_target_mask = reply_target!=0 # (batch_size, max_words) - - return question_mask, reply_input_mask, 
reply_target_mask
-
-
-class AdamWarmup:
-
-    def __init__(self, model_size, warmup_steps, optimizer):
-
-        self.model_size = model_size
-        self.warmup_steps = warmup_steps
-        self.optimizer = optimizer
-        self.current_step = 0
-        self.lr = 0
-
-    def get_lr(self):
-        # "Noam" schedule from "Attention Is All You Need": linear warmup
-        # followed by inverse-square-root decay.
-        return self.model_size ** (-0.5) * min(self.current_step ** (-0.5), self.current_step * self.warmup_steps ** (-1.5))
-
-    def step(self):
-        # Increment the step count on every optimizer step
-        self.current_step += 1
-        lr = self.get_lr()
-        for param_group in self.optimizer.param_groups:
-            param_group['lr'] = lr
-        # keep the current learning rate around for logging
-        self.lr = lr
-        self.optimizer.step()
-
-class LossWithLS(nn.Module):
-    """KL-divergence loss with label smoothing (KLDivLoss expects log-probabilities)."""
-
-    def __init__(self, size, smooth):
-        super(LossWithLS, self).__init__()
-        # reduction='none' replaces the deprecated size_average=False, reduce=False
-        self.criterion = nn.KLDivLoss(reduction='none')
-        self.confidence = 1.0 - smooth
-        self.smooth = smooth
-        self.size = size
-
-    def forward(self, prediction, target, mask):
-        """
-        prediction of shape: (batch_size, max_words, vocab_size), log-probabilities
-        target and mask of shape: (batch_size, max_words)
-        """
-        prediction = prediction.view(-1, prediction.size(-1))   # (batch_size * max_words, vocab_size)
-        target = target.contiguous().view(-1)   # (batch_size * max_words)
-        mask = mask.float()
-        mask = mask.view(-1)       # (batch_size * max_words)
-        # build the smoothed target distribution: `smooth` mass spread over the
-        # vocabulary, `confidence` mass on the gold token
-        labels = prediction.data.clone()
-        labels.fill_(self.smooth / (self.size - 1))
-        labels.scatter_(1, target.data.unsqueeze(1), self.confidence)
-        loss = self.criterion(prediction, labels)    # (batch_size * max_words, vocab_size)
-        loss = (loss.sum(1) * mask).sum() / mask.sum()
-        return loss
diff --git a/spaces/Dorado607/ChuanhuChatGPT/Dockerfile b/spaces/Dorado607/ChuanhuChatGPT/Dockerfile
deleted file mode 100644
index 85d5045d5316ac160277af1e7d60afa823c0f953..0000000000000000000000000000000000000000
--- a/spaces/Dorado607/ChuanhuChatGPT/Dockerfile
+++ /dev/null
@@ -1,18 +0,0 @@
-FROM python:3.9-slim-buster as builder
-RUN apt-get update \
-    && apt-get install -y build-essential \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/*
-COPY requirements.txt .
-COPY requirements_advanced.txt .
-RUN pip install --user --no-cache-dir -r requirements.txt
-# RUN pip install --user --no-cache-dir -r requirements_advanced.txt
-
-FROM python:3.9-slim-buster
-LABEL maintainer="iskoldt"
-COPY --from=builder /root/.local /root/.local
-ENV PATH=/root/.local/bin:$PATH
-COPY . /app
-WORKDIR /app
-ENV dockerrun=yes
-# shell form so that the pipe to `tee` is interpreted by a shell; the exec-form
-# JSON array would pass "2>&1", "|" and "tee" as literal arguments to python3
-CMD python3 -u ChuanhuChatbot.py 2>&1 | tee /var/log/application.log
diff --git a/spaces/Dorado607/ChuanhuChatGPT/modules/pdf_func.py b/spaces/Dorado607/ChuanhuChatGPT/modules/pdf_func.py
deleted file mode 100644
index 1b1087f2687fd26c8676867dd45189c069dd56a5..0000000000000000000000000000000000000000
--- a/spaces/Dorado607/ChuanhuChatGPT/modules/pdf_func.py
+++ /dev/null
@@ -1,180 +0,0 @@
-from types import SimpleNamespace
-import pdfplumber
-import logging
-from langchain.docstore.document import Document
-
-def prepare_table_config(crop_page):
-    """Prepare the table-detection settings; `crop_page` must come from an original page.
-
-    From https://github.com/jsvine/pdfplumber/issues/242
-    """
-    page = crop_page.root_page # root/parent
-    cs = page.curves + page.edges
-    def curves_to_edges():
-        """See https://github.com/jsvine/pdfplumber/issues/127"""
-        edges = []
-        for c in cs:
-            edges += pdfplumber.utils.rect_to_edges(c)
-        return edges
-    edges = curves_to_edges()
-    return {
-        "vertical_strategy": "explicit",
-        "horizontal_strategy": "explicit",
-        "explicit_vertical_lines": edges,
-        "explicit_horizontal_lines": edges,
-        "intersection_y_tolerance": 10,
-    }
-
-def get_text_outside_table(crop_page):
-    ts = prepare_table_config(crop_page)
-    if len(ts["explicit_vertical_lines"]) == 0 or len(ts["explicit_horizontal_lines"]) == 0:
-        return crop_page
-
-    ### Get the bounding boxes of the tables on the page.
-    bboxes = [table.bbox for table in crop_page.root_page.find_tables(table_settings=ts)]
-    def not_within_bboxes(obj):
-        """Check if the object is in any of the table's bboxes."""
-        def obj_in_bbox(_bbox):
-            """See https://github.com/jsvine/pdfplumber/blob/stable/pdfplumber/table.py#L404"""
-            v_mid = (obj["top"] + obj["bottom"]) / 2
-            h_mid = (obj["x0"] + obj["x1"]) / 2
-            x0, top, x1, bottom = _bbox
-            return (h_mid >= x0) and (h_mid < x1) and (v_mid >= top) and (v_mid < bottom)
-        return not any(obj_in_bbox(__bbox) for __bbox in bboxes)
-
-    return crop_page.filter(not_within_bboxes)
-
-extract_words = lambda page: page.extract_words(keep_blank_chars=True, y_tolerance=0, x_tolerance=1, extra_attrs=["fontname", "size", "object_type"])
-# dict_keys(['text', 'x0', 'x1', 'top', 'doctop', 'bottom', 'upright', 'direction', 'fontname', 'size'])
-
-def get_title_with_cropped_page(first_page):
-    title = [] # collect title words
-    x0,top,x1,bottom = first_page.bbox # page bounding box
-
-    for word in extract_words(first_page):
-        word = SimpleNamespace(**word)
-
-        if word.size >= 14:
-            title.append(word.text)
-            title_bottom = word.bottom
-        elif word.text == "Abstract": # locate the abstract
-            top = word.top
-
-    user_info = [i["text"] for i in extract_words(first_page.within_bbox((x0,title_bottom,x1,top)))]
-    # crop away the upper part; within_bbox keeps fully-contained objects, crop keeps partially-contained ones
-    return title, user_info, first_page.within_bbox((x0,top,x1,bottom))
-
-def get_column_cropped_pages(pages, two_column=True):
-    new_pages = []
-    for page in pages:
-        if two_column:
-            left = page.within_bbox((0, 0, page.width/2, page.height), relative=True)
-            right = page.within_bbox((page.width/2, 0, page.width, page.height), relative=True)
-            new_pages.append(left)
-            new_pages.append(right)
-        else:
-            new_pages.append(page)
-
-    return new_pages
-
-def parse_pdf(filename, two_column=True):
-    level = logging.getLogger().level
-    if level == logging.getLevelName("DEBUG"):
-        logging.getLogger().setLevel("INFO")
-
-    with pdfplumber.open(filename) as pdf:
-        title, user_info, first_page = get_title_with_cropped_page(pdf.pages[0])
-        new_pages = get_column_cropped_pages([first_page] + pdf.pages[1:], two_column)
-
-        chapters = []
-        # tuple (chapter_name, [pageid] (start,stop), chapter_text)
-        create_chapter = lambda page_start, name_top, name_bottom: SimpleNamespace(
-            name=[],
-            name_top=name_top,
-            name_bottom=name_bottom,
-            record_chapter_name=True,
-
-            page_start=page_start,
-            page_stop=None,
-
-            text=[],
-        )
-        cur_chapter = None
-
-        # walk the PDF page by page
-        for idx, page in enumerate(new_pages):
-            page = get_text_outside_table(page)
-
-            # walk the words on each page
-            for word in extract_words(page):
-                word = SimpleNamespace(**word)
-
-                # a word in a large font (size >= 11) starts or continues a chapter name
-                if word.size >= 11: # chapter name encountered
-                    if cur_chapter is None:
-                        cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-                    elif not cur_chapter.record_chapter_name or (word.bottom != cur_chapter.name_bottom and word.top != cur_chapter.name_top):
-                        # a heading at a new position closes the current chapter
-                        cur_chapter.page_stop = page.page_number # stop id
-                        chapters.append(cur_chapter)
-                        # start a fresh chapter
-                        cur_chapter = create_chapter(page.page_number, word.top, word.bottom)
-
-                    # print(word.size, word.top, word.bottom, word.text)
-                    cur_chapter.name.append(word.text)
-                else:
-                    cur_chapter.record_chapter_name = False # chapter name finished
-                    cur_chapter.text.append(word.text)
-        else:
-            # flush the last chapter
-            cur_chapter.page_stop = page.page_number # stop id
-            chapters.append(cur_chapter)
-
-        for i in chapters:
-            logging.info(f"section: {i.name} pages:{i.page_start, i.page_stop} word-count:{len(i.text)}")
-            logging.debug(" ".join(i.text))
-
-        title = " ".join(title)
-        user_info = " ".join(user_info)
-        text = f"Article Title: {title}, Information:{user_info}\n"
-        for idx, chapter in enumerate(chapters):
-            chapter.name = " ".join(chapter.name)
-            text += f"The {idx}th Chapter {chapter.name}: " + " ".join(chapter.text) + "\n"
-
-    logging.getLogger().setLevel(level)
-    return Document(page_content=text, metadata={"title": title})
-
-BASE_POINTS = """
-1. Who are the authors?
-2. What is the process of the proposed method?
-3. What is the performance of the proposed method? Please note down its performance metrics.
-4. What are the baseline models and their performances? Please note down these baseline methods.
-5. What dataset did this paper use?
-"""
-
-READING_PROMPT = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-When you are reading, You need to focus on these key points:{}
-"""
-
-READING_PROMT_V2 = """
-You are a researcher helper bot. You can help the user with research paper reading and summarizing. \n
-Now I am going to send you a paper. You need to read it and summarize it for me part by part. \n
-When you are reading, You need to focus on these key points:{},
-
-And You need to generate a brief but informative title for this part.
-Your return format:
-- title: '...'
-- summary: '...'
-"""
-
-SUMMARY_PROMPT = "You are a researcher helper bot. Now you need to read the summaries of a research paper."
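-
-# Hypothetical helper (not in the original file), shown as a usage sketch:
-# fill the reading prompt's {} slot with the key points before sending it,
-# together with the parsed Document's page_content, to the chat backend.
-def build_reading_prompt(key_points: str = BASE_POINTS) -> str:
-    """Return READING_PROMPT with `key_points` substituted in."""
-    return READING_PROMPT.format(key_points)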
-
-
-if __name__ == '__main__':
-    # Test code
-    doc = parse_pdf("./build/test.pdf")
-    # parse_pdf returns a langchain Document: the title lives in metadata and the
-    # concatenated text (title, author info, chapters) in page_content.
-    print(doc.metadata["title"])
-    print(doc.page_content)
\ No newline at end of file
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/mapper/training/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/mapper/training/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_s_mix_det.py b/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_s_mix_det.py
deleted file mode 100644
index 95f1810872b9cefd4a4d5c21c45df7b9747a24aa..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_s_mix_det.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# encoding: utf-8
-import os
-import random
-import torch
-import torch.nn as nn
-import torch.distributed as dist
-
-from yolox.exp import Exp as MyExp
-from yolox.data import get_yolox_datadir
-
-class Exp(MyExp):
-    def __init__(self):
-        super(Exp, self).__init__()
-        self.num_classes = 1
-        self.depth = 0.33
-        self.width = 0.50
-        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
-        self.train_ann = "train.json"
-        self.val_ann = "train.json"
-        self.input_size = (608, 1088)
-        self.test_size = (608, 1088)
-        self.random_size = (12, 26)
-        self.max_epoch = 80
-        self.print_interval = 20
-        self.eval_interval = 5
-        self.test_conf = 0.001
-        self.nmsthre = 0.7
-        self.no_aug_epochs = 10
-        self.basic_lr_per_img = 0.001 / 64.0
-        self.warmup_epochs = 1
-
-    def get_data_loader(self, batch_size, is_distributed, no_aug=False):
-        from yolox.data import (
-            MOTDataset,
-            TrainTransform,
-            YoloBatchSampler,
-            DataLoader,
-            InfiniteSampler,
-            MosaicDetection,
-        )
-
-        dataset = MOTDataset(
-            data_dir=os.path.join(get_yolox_datadir(), "mix_det"),
-            json_file=self.train_ann,
-            name='',
-            img_size=self.input_size,
-            preproc=TrainTransform(
-                rgb_means=(0.485, 0.456, 0.406),
-                std=(0.229, 0.224, 0.225),
-                max_labels=500,
-            ),
-        )
-
-        dataset = MosaicDetection(
-            dataset,
-            mosaic=not no_aug,
-            img_size=self.input_size,
-            preproc=TrainTransform(
-                rgb_means=(0.485, 0.456, 0.406),
-                std=(0.229, 0.224, 0.225),
-                max_labels=1000,
-            ),
-            degrees=self.degrees,
-            translate=self.translate,
-            scale=self.scale,
-            shear=self.shear,
-            perspective=self.perspective,
-            enable_mixup=self.enable_mixup,
-        )
-
-        self.dataset = dataset
-
-        if is_distributed:
-            batch_size = batch_size // dist.get_world_size()
-
-        sampler = InfiniteSampler(
-            len(self.dataset), seed=self.seed if self.seed else 0
-        )
-
-        batch_sampler = YoloBatchSampler(
-            sampler=sampler,
-            batch_size=batch_size,
-            drop_last=False,
-            input_dimension=self.input_size,
-            mosaic=not no_aug,
-        )
-
-        dataloader_kwargs = {"num_workers": self.data_num_workers, "pin_memory": True}
-        dataloader_kwargs["batch_sampler"] = batch_sampler
-        train_loader = DataLoader(self.dataset, **dataloader_kwargs)
-
-        return train_loader
-
-    def get_eval_loader(self, batch_size, is_distributed, testdev=False):
-        from yolox.data import MOTDataset, ValTransform
-
-        valdataset = MOTDataset(
-            data_dir=os.path.join(get_yolox_datadir(), "mot"),
-            json_file=self.val_ann,
-            img_size=self.test_size,
-            name='train',
-            preproc=ValTransform(
-                rgb_means=(0.485, 0.456, 0.406),
-                std=(0.229, 0.224, 0.225),
-            ),
-        )
-
-        if is_distributed:
-            batch_size = batch_size // dist.get_world_size()
-            sampler = torch.utils.data.distributed.DistributedSampler(
-                valdataset, shuffle=False
-            )
-        else:
-            sampler = 
torch.utils.data.SequentialSampler(valdataset) - - dataloader_kwargs = { - "num_workers": self.data_num_workers, - "pin_memory": True, - "sampler": sampler, - } - dataloader_kwargs["batch_size"] = batch_size - val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs) - - return val_loader - - def get_evaluator(self, batch_size, is_distributed, testdev=False): - from yolox.evaluators import COCOEvaluator - - val_loader = self.get_eval_loader(batch_size, is_distributed, testdev=testdev) - evaluator = COCOEvaluator( - dataloader=val_loader, - img_size=self.test_size, - confthre=self.test_conf, - nmsthre=self.nmsthre, - num_classes=self.num_classes, - testdev=testdev, - ) - return evaluator diff --git a/spaces/Egrt/GCycleGAN/utils/utils_fit.py b/spaces/Egrt/GCycleGAN/utils/utils_fit.py deleted file mode 100644 index c57a55ffa174d24a1d2b99b5a50d0f668fe176df..0000000000000000000000000000000000000000 --- a/spaces/Egrt/GCycleGAN/utils/utils_fit.py +++ /dev/null @@ -1,249 +0,0 @@ -import os -import torch -import torch.nn.functional as F -from tqdm import tqdm -from nets.cyclegan import compute_gradient_penalty -from utils.utils import get_lr, show_result - - -def fit_one_epoch(G_model_A2B_train, G_model_B2A_train, D_model_A_train, D_model_B_train, G_model_A2B, G_model_B2A, D_model_A, D_model_B, VGG_feature_model, ResNeSt_model, loss_history, - G_optimizer, D_optimizer_A, D_optimizer_B, BCE_loss, L1_loss, Face_loss, epoch, epoch_step, gen, Epoch, cuda, fp16, scaler, save_period, save_dir, photo_save_step, local_rank=0): - G_total_loss = 0 - D_total_loss_A = 0 - D_total_loss_B = 0 - - if local_rank == 0: - print('Start Train') - pbar = tqdm(total=epoch_step,desc=f'Epoch {epoch + 1}/{Epoch}',postfix=dict,mininterval=0.3) - for iteration, batch in enumerate(gen): - if iteration >= epoch_step: - break - - images_A, images_B = batch[0], batch[1] - batch_size = images_A.size()[0] - y_real = torch.ones(batch_size) - y_fake = torch.zeros(batch_size) - - with torch.no_grad(): - if cuda: - images_A, images_B, y_real, y_fake = images_A.cuda(local_rank), images_B.cuda(local_rank), y_real.cuda(local_rank), y_fake.cuda(local_rank) - - if not fp16: - #---------------------------------# - # 训练生成器A2B和B2A - #---------------------------------# - G_optimizer.zero_grad() - - Same_B = G_model_A2B_train(images_B) - loss_identity_B = L1_loss(Same_B, images_B) - - Same_A = G_model_B2A_train(images_A) - loss_identity_A = L1_loss(Same_A, images_A) - - fake_B = G_model_A2B_train(images_A) - pred_real = D_model_B_train(images_B) - pred_fake = D_model_B_train(fake_B) - pred_rf = pred_real - pred_fake.mean() - pred_fr = pred_fake - pred_real.mean() - D_train_loss_rf = BCE_loss(pred_rf, y_fake) - D_train_loss_fr = BCE_loss(pred_fr, y_real) - loss_GAN_A2B = (D_train_loss_rf + D_train_loss_fr) / 2 - - fake_A = G_model_B2A_train(images_B) - pred_real = D_model_A_train(images_A) - pred_fake = D_model_A_train(fake_A) - pred_rf = pred_real - pred_fake.mean() - pred_fr = pred_fake - pred_real.mean() - D_train_loss_rf = BCE_loss(pred_rf, y_fake) - D_train_loss_fr = BCE_loss(pred_fr, y_real) - loss_GAN_B2A = (D_train_loss_rf + D_train_loss_fr) / 2 - - recovered_A = G_model_B2A_train(fake_B) - loss_cycle_ABA = L1_loss(recovered_A, images_A) - - loss_per_ABA = L1_loss(VGG_feature_model(recovered_A), VGG_feature_model(images_A)) - - recovered_A_face = F.interpolate(recovered_A, size=(112, 112), mode='bicubic', align_corners=True) - images_A_face = F.interpolate(images_A, size=(112, 112), mode='bicubic', align_corners=True) - 
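# identity term: embeddings of the 112x112-resized faces from the frozen
-            # face model are compared; 1 - Face_loss (a similarity score, e.g.
-            # cosine) penalises identity drift between output and input
-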
loss_face_ABA = torch.mean(1. - Face_loss(ResNeSt_model(recovered_A_face), ResNeSt_model(images_A_face))) - - recovered_B = G_model_A2B_train(fake_A) - loss_cycle_BAB = L1_loss(recovered_B, images_B) - - loss_per_BAB = L1_loss(VGG_feature_model(recovered_B), VGG_feature_model(images_B)) - - recovered_B_face = F.interpolate(recovered_B, size=(112, 112), mode='bicubic', align_corners=True) - images_B_face = F.interpolate(images_B, size=(112, 112), mode='bicubic', align_corners=True) - loss_face_BAB = torch.mean(1. - Face_loss(ResNeSt_model(recovered_B_face), ResNeSt_model(images_B_face))) - - G_loss = loss_identity_A * 5.0 + loss_identity_B * 5.0 + loss_GAN_A2B + loss_GAN_B2A + loss_per_ABA * 2.5 \ - + loss_per_BAB *2.5 + loss_cycle_ABA * 10.0 + loss_cycle_BAB * 10.0 + loss_face_ABA * 5 + loss_face_BAB * 5 - G_loss.backward() - G_optimizer.step() - - #---------------------------------# - # 训练评价器A - #---------------------------------# - D_optimizer_A.zero_grad() - pred_real = D_model_A_train(images_A) - pred_fake = D_model_A_train(fake_A.detach()) - pred_rf = pred_real - pred_fake.mean() - pred_fr = pred_fake - pred_real.mean() - D_train_loss_rf = BCE_loss(pred_rf, y_real) - D_train_loss_fr = BCE_loss(pred_fr, y_fake) - gradient_penalty = compute_gradient_penalty(D_model_A_train, images_A, fake_A.detach()) - - D_loss_A = 10 * gradient_penalty + (D_train_loss_rf + D_train_loss_fr) / 2 - D_loss_A.backward() - D_optimizer_A.step() - - #---------------------------------# - # 训练评价器B - #---------------------------------# - D_optimizer_B.zero_grad() - - pred_real = D_model_B_train(images_B) - pred_fake = D_model_B_train(fake_B.detach()) - pred_rf = pred_real - pred_fake.mean() - pred_fr = pred_fake - pred_real.mean() - D_train_loss_rf = BCE_loss(pred_rf, y_real) - D_train_loss_fr = BCE_loss(pred_fr, y_fake) - gradient_penalty = compute_gradient_penalty(D_model_B_train, images_B, fake_B.detach()) - - D_loss_B = 10 * gradient_penalty + (D_train_loss_rf + D_train_loss_fr) / 2 - D_loss_B.backward() - D_optimizer_B.step() - - else: - from torch.cuda.amp import autocast - - #---------------------------------# - # 训练生成器A2B和B2A - #---------------------------------# - with autocast(): - G_optimizer.zero_grad() - Same_B = G_model_A2B_train(images_B) - loss_identity_B = L1_loss(Same_B, images_B) - - Same_A = G_model_B2A_train(images_A) - loss_identity_A = L1_loss(Same_A, images_A) - - fake_B = G_model_A2B_train(images_A) - pred_real = D_model_B_train(images_B) - pred_fake = D_model_B_train(fake_B) - pred_rf = pred_real - pred_fake.mean() - pred_fr = pred_fake - pred_real.mean() - D_train_loss_rf = BCE_loss(pred_rf, y_fake) - D_train_loss_fr = BCE_loss(pred_fr, y_real) - loss_GAN_A2B = (D_train_loss_rf + D_train_loss_fr) / 2 - - fake_A = G_model_B2A_train(images_B) - pred_real = D_model_A_train(images_A) - pred_fake = D_model_A_train(fake_A) - pred_rf = pred_real - pred_fake.mean() - pred_fr = pred_fake - pred_real.mean() - D_train_loss_rf = BCE_loss(pred_rf, y_fake) - D_train_loss_fr = BCE_loss(pred_fr, y_real) - loss_GAN_B2A = (D_train_loss_rf + D_train_loss_fr) / 2 - - recovered_A = G_model_B2A_train(fake_B) - loss_cycle_ABA = L1_loss(recovered_A, images_A) - recovered_A_face = F.interpolate(recovered_A, size=(112, 112), mode='bicubic', align_corners=True) - images_A_face = F.interpolate(images_A, size=(112, 112), mode='bicubic', align_corners=True) - loss_face_ABA = torch.mean(1. 
- Face_loss(ResNeSt_model(recovered_A_face), ResNeSt_model(images_A_face))) - - recovered_B = G_model_A2B_train(fake_A) - loss_cycle_BAB = L1_loss(recovered_B, images_B) - recovered_B_face = F.interpolate(recovered_B, size=(112, 112), mode='bicubic', align_corners=True) - images_B_face = F.interpolate(images_B, size=(112, 112), mode='bicubic', align_corners=True) - loss_face_BAB = torch.mean(1. - Face_loss(ResNeSt_model(recovered_B_face), ResNeSt_model(images_B_face))) - - G_loss = loss_identity_A * 5.0 + loss_identity_B * 5.0 + loss_GAN_A2B + loss_GAN_B2A \ - + loss_cycle_ABA * 10.0 + loss_cycle_BAB * 10.0 + loss_face_ABA * 5 + loss_face_BAB * 5 - #----------------------# - # 反向传播 - #----------------------# - scaler.scale(G_loss).backward() - scaler.step(G_optimizer) - scaler.update() - - #---------------------------------# - # 训练评价器A - #---------------------------------# - with autocast(): - D_optimizer_A.zero_grad() - pred_real = D_model_A_train(images_A) - pred_fake = D_model_A_train(fake_A.detach()) - pred_rf = pred_real - pred_fake.mean() - pred_fr = pred_fake - pred_real.mean() - D_train_loss_rf = BCE_loss(pred_rf, y_real) - D_train_loss_fr = BCE_loss(pred_fr, y_fake) - gradient_penalty = compute_gradient_penalty(D_model_A_train, images_A, fake_A.detach()) - - D_loss_A = 10 * gradient_penalty + (D_train_loss_rf + D_train_loss_fr) / 2 - #----------------------# - # 反向传播 - #----------------------# - scaler.scale(D_loss_A).backward() - scaler.step(D_optimizer_A) - scaler.update() - - #---------------------------------# - # 训练评价器B - #---------------------------------# - with autocast(): - D_optimizer_B.zero_grad() - - pred_real = D_model_B_train(images_B) - pred_fake = D_model_B_train(fake_B.detach()) - pred_rf = pred_real - pred_fake.mean() - pred_fr = pred_fake - pred_real.mean() - D_train_loss_rf = BCE_loss(pred_rf, y_real) - D_train_loss_fr = BCE_loss(pred_fr, y_fake) - gradient_penalty = compute_gradient_penalty(D_model_B_train, images_B, fake_B.detach()) - - D_loss_B = 10 * gradient_penalty + (D_train_loss_rf + D_train_loss_fr) / 2 - #----------------------# - # 反向传播 - #----------------------# - scaler.scale(D_loss_B).backward() - scaler.step(D_optimizer_B) - scaler.update() - - G_total_loss += G_loss.item() - D_total_loss_A += D_loss_A.item() - D_total_loss_B += D_loss_B.item() - - if local_rank == 0: - pbar.set_postfix(**{'G_loss' : G_total_loss / (iteration + 1), - 'D_loss_A' : D_total_loss_A / (iteration + 1), - 'D_loss_B' : D_total_loss_B / (iteration + 1), - 'lr' : get_lr(G_optimizer)}) - pbar.update(1) - - if iteration % photo_save_step == 0: - show_result(epoch + 1, G_model_A2B, G_model_B2A, images_A, images_B) - - G_total_loss = G_total_loss / epoch_step - D_total_loss_A = D_total_loss_A / epoch_step - D_total_loss_B = D_total_loss_B / epoch_step - - if local_rank == 0: - pbar.close() - print('Epoch:'+ str(epoch + 1) + '/' + str(Epoch)) - print('G Loss: %.4f || D Loss A: %.4f || D Loss B: %.4f ' % (G_total_loss, D_total_loss_A, D_total_loss_B)) - loss_history.append_loss(epoch + 1, G_total_loss = G_total_loss, D_total_loss_A = D_total_loss_A, D_total_loss_B = D_total_loss_B) - - #-----------------------------------------------# - # 保存权值 - #-----------------------------------------------# - if (epoch + 1) % save_period == 0 or epoch + 1 == Epoch: - torch.save(G_model_A2B.state_dict(), os.path.join(save_dir, 'G_model_A2B_Epoch%d-GLoss%.4f-DALoss%.4f-DBLoss%.4f.pth'%(epoch + 1, G_total_loss, D_total_loss_A, D_total_loss_B))) - torch.save(G_model_B2A.state_dict(), 
os.path.join(save_dir, 'G_model_B2A_Epoch%d-GLoss%.4f-DALoss%.4f-DBLoss%.4f.pth'%(epoch + 1, G_total_loss, D_total_loss_A, D_total_loss_B)))
-            torch.save(D_model_A.state_dict(), os.path.join(save_dir, 'D_model_A_Epoch%d-GLoss%.4f-DALoss%.4f-DBLoss%.4f.pth'%(epoch + 1, G_total_loss, D_total_loss_A, D_total_loss_B)))
-            torch.save(D_model_B.state_dict(), os.path.join(save_dir, 'D_model_B_Epoch%d-GLoss%.4f-DALoss%.4f-DBLoss%.4f.pth'%(epoch + 1, G_total_loss, D_total_loss_A, D_total_loss_B)))
-
-        torch.save(G_model_A2B.state_dict(), os.path.join(save_dir, "G_model_A2B_last_epoch_weights.pth"))
-        torch.save(G_model_B2A.state_dict(), os.path.join(save_dir, "G_model_B2A_last_epoch_weights.pth"))
-        torch.save(D_model_A.state_dict(), os.path.join(save_dir, "D_model_A_last_epoch_weights.pth"))
-        torch.save(D_model_B.state_dict(), os.path.join(save_dir, "D_model_B_last_epoch_weights.pth"))
\ No newline at end of file
diff --git a/spaces/EnigmaOfTheWorld/GenZBot/README.md b/spaces/EnigmaOfTheWorld/GenZBot/README.md
deleted file mode 100644
index d08c4ce8c09c3e0deebb7b5637f61c34c227b466..0000000000000000000000000000000000000000
--- a/spaces/EnigmaOfTheWorld/GenZBot/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: GenZBot
-emoji: 📚
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/modules/F0Predictor/__init__.py b/spaces/EronSamez/RVC_HFmeu/lib/infer_pack/modules/F0Predictor/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/models.py b/spaces/FrankZxShen/vits-fast-finetuning-umamusume/models.py
deleted file mode 100644
index 65f9ae5255616efa19a4f28bc0a840d4c453a060..0000000000000000000000000000000000000000
--- a/spaces/FrankZxShen/vits-fast-finetuning-umamusume/models.py
+++ /dev/null
@@ -1,722 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
-    def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
-        super().__init__()
-        filter_channels = in_channels  # TODO: remove this override in a future version
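-        # Stochastic duration predictor (VITS): durations are modelled with a
-        # conditional normalizing flow, so sampling at inference yields varied
-        # speech rhythm instead of one deterministic duration per phoneme.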
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class TextEncoder_lora(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels, r=4) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder_lora( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = 
flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - 
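# channel width grows 32 -> 128 -> 512 -> 1024 while each (k, 1) conv
-            # strides only along the time axis that forward() folds by `period`
-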
norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - 
n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = 
self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - - -class SynthesizerTrn_lora(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder_lora(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = 
monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
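-        # flow-based voice conversion: encode the source audio to the prior
-        # space under the source speaker embedding, then invert the flow under
-        # the target speaker embedding and decode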
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) \ No newline at end of file diff --git a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/audio.py b/spaces/FridaZuley/RVC_HFKawaii/infer/lib/audio.py deleted file mode 100644 index 9ad4ff74218957cf18782fa71add40a734b47e78..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/infer/lib/audio.py +++ /dev/null @@ -1,197 +0,0 @@ -import librosa -import numpy as np -import av -from io import BytesIO -import ffmpeg -import os -import sys - -import random -from infer.lib.csvutil import CSVutil -#import csv - -platform_stft_mapping = { - 'linux': 'stftpitchshift', - 'darwin': 'stftpitchshift', - 'win32': 'stftpitchshift.exe', -} - -stft = platform_stft_mapping.get(sys.platform) - -def wav2(i, o, format): - inp = av.open(i, 'rb') - if format == "m4a": format = "mp4" - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "mp4": format = "aac" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -def audio2(i, o, format, sr): - inp = av.open(i, 'rb') - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "f32le": format = "pcm_f32le" - - ostream = out.add_stream(format, channels=1) - ostream.sample_rate = sr - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - out.close() - inp.close() - -def load_audion(file, sr): - try: - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - with open(file, "rb") as f: - with BytesIO() as out: - audio2(f, out, "f32le", sr) - return np.frombuffer(out.getvalue(), np.float32).flatten() - - except AttributeError: - audio = file[1] / 32768.0 - if len(audio.shape) == 2: - audio = np.mean(audio, -1) - return librosa.resample(audio, orig_sr=file[0], target_sr=16000) - - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - - - -def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0): - converted = False - DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting") - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 
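-        # When formant shifting is enabled (per csvdb/formanting.csv), the input
-        # is first converted to WAV if needed and run through the external
-        # `stftpitchshift` CLI before the final decode to mono float32 at `sr`.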
-        file = (
-            file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-        )  # guard against pasted paths carrying stray spaces, quotes or newlines
-        file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-
-        # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n")
-
-        # CSVutil returns strings, so normalize "True"/"False" to booleans
-        if isinstance(DoFormant, str):
-            if DoFormant.lower() == "true":
-                DoFormant = True
-            elif DoFormant.lower() == "false":
-                DoFormant = False
-        if DoFormant:
-            numerator = round(random.uniform(1, 4), 4)
-            # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}")
-            # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted))
-
-            if not file.endswith(".wav"):
-                if not os.path.isfile(f"{file_formanted}.wav"):
-                    converted = True
-                    # print(f"\nfile = {file}\n")
-                    # print(f"\nfile_formanted = {file_formanted}\n")
-                    converting = (
-                        ffmpeg.input(file_formanted, threads=0)
-                        .output(f"{file_formanted}.wav")
-                        .run(
-                            cmd=["ffmpeg", "-nostdin"],
-                            capture_stdout=True,
-                            capture_stderr=True,
-                        )
-                    )
-
-            file_formanted = (
-                f"{file_formanted}.wav"
-                if not file_formanted.endswith(".wav")
-                else file_formanted
-            )
-
-            print(f" · Formanting {file_formanted}...\n")
-
-            os.system(
-                '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"'
-                % (
-                    stft,
-                    file_formanted,
-                    Quefrency,
-                    Timbre,
-                    file_formanted,
-                    str(numerator),
-                )
-            )
-
-            print(f" · Formanted {file_formanted}!\n")
-
-            # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\')
-            # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\')
-            # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator)))
-
-            out, _ = (
-                ffmpeg.input(
-                    "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0
-                )
-                .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
-                .run(
-                    cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True
-                )
-            )
-
-            try:
-                os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator)))
-            except Exception:
-                # best-effort cleanup of the intermediate file
-                print("couldn't remove formanted type of file")
-
-        else:
-            out, _ = (
-                ffmpeg.input(file, threads=0)
-                .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr)
-                .run(
-                    cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True
-                )
-            )
-    except Exception as e:
-        raise RuntimeError(f"Failed to load audio: {e}")
-
-    if converted:
-        try:
-            os.remove(file_formanted)
-        except Exception:
-            # best-effort cleanup of the converted wav
-            print("couldn't remove converted type of file")
-        converted = False
-
-    return np.frombuffer(out, np.float32).flatten()
-
-
-def check_audio_duration(file):
-    try:
-        file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ")
-
-        probe = ffmpeg.probe(file)
-
-        duration = float(probe['streams'][0]['duration'])
-
-        if duration < 0.76:
-            print(
-                f"\n------------\n"
-                f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. Target at least 1-2s for best results."
- f"\n------------\n\n" - ) - return False - - return True - except Exception as e: - raise RuntimeError(f"Failed to check audio duration: {e}") \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/cylinder_stand_alignment.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/cylinder_stand_alignment.py deleted file mode 100644 index 7c5bda6db24e6541249a47894f7c3ae6d17a0df1..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/cylinder_stand_alignment.py +++ /dev/null @@ -1,58 +0,0 @@ -import numpy as np -import os -import pybullet as p -import random -from cliport.tasks import primitives -from cliport.tasks.grippers import Spatula -from cliport.tasks.task import Task -from cliport.utils import utils -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class CylinderStandAlignment(Task): - """Arrange four colored cylinders (red, blue, green, yellow) in order of their colors on four stands of matching color.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "Arrange the {color} cylinder on the {color} stand" - self.task_completed_desc = "done arranging cylinders." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Define colors and corresponding names - colors = [utils.COLORS['red'], utils.COLORS['blue'], utils.COLORS['green'], utils.COLORS['yellow']] - color_names = ['red', 'blue', 'green', 'yellow'] - - # Add cylinders. - # x, y, z dimensions for the asset size - cylinder_size = (0.04, 0.04, 0.04) - cylinder_urdf = 'cylinder/cylinder-template.urdf' - cylinders = [] - for i in range(4): - cylinder_pose = self.get_random_pose(env, cylinder_size) - replace = {'DIM': cylinder_size, 'HALF': (cylinder_size[0] / 2, cylinder_size[1] / 2, cylinder_size[2] / 2), - 'COLOR': colors[i]} - # IMPORTANT: REPLACE THE TEMPLATE URDF - urdf = self.fill_template(cylinder_urdf, replace) - cylinder_id = env.add_object(urdf, cylinder_pose) - cylinders.append(cylinder_id) - - # Add stands. - # x, y, z dimensions for the asset size - stand_size = (0.05, 0.05, 0.005) - stand_urdf = 'stacking/stand.urdf' - stands = [] - for i in range(4): - stand_pose = self.get_random_pose(env, stand_size) - env.add_object(stand_urdf, stand_pose, color=colors[i], category='fixed') - stands.append(stand_pose) - - # Goal: each cylinder is on a stand of the same color. - for i in range(4): - self.add_goal(objs=[cylinders[i]], matches=np.ones((1, 1)), targ_poses=[stands[i]], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 4, - language_goal=self.lang_template.format(color=color_names[i])) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/mask/mask_target.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/mask/mask_target.py deleted file mode 100644 index 15d26a88bbf3710bd92813335918407db8c4e053..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/mask/mask_target.py +++ /dev/null @@ -1,122 +0,0 @@ -import numpy as np -import torch -from torch.nn.modules.utils import _pair - - -def mask_target(pos_proposals_list, pos_assigned_gt_inds_list, gt_masks_list, - cfg): - """Compute mask target for positive proposals in multiple images. - - Args: - pos_proposals_list (list[Tensor]): Positive proposals in multiple - images. - pos_assigned_gt_inds_list (list[Tensor]): Assigned GT indices for each - positive proposals. 
- gt_masks_list (list[:obj:`BaseInstanceMasks`]): Ground truth masks of - each image. - cfg (dict): Config dict that specifies the mask size. - - Returns: - list[Tensor]: Mask target of each image. - - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * - >>> H, W = 17, 18 - >>> cfg = mmcv.Config({'mask_size': (13, 14)}) - >>> rng = np.random.RandomState(0) - >>> # Positive proposals (tl_x, tl_y, br_x, br_y) for each image - >>> pos_proposals_list = [ - >>> torch.Tensor([ - >>> [ 7.2425, 5.5929, 13.9414, 14.9541], - >>> [ 7.3241, 3.6170, 16.3850, 15.3102], - >>> ]), - >>> torch.Tensor([ - >>> [ 4.8448, 6.4010, 7.0314, 9.7681], - >>> [ 5.9790, 2.6989, 7.4416, 4.8580], - >>> [ 0.0000, 0.0000, 0.1398, 9.8232], - >>> ]), - >>> ] - >>> # Corresponding class index for each proposal for each image - >>> pos_assigned_gt_inds_list = [ - >>> torch.LongTensor([7, 0]), - >>> torch.LongTensor([5, 4, 1]), - >>> ] - >>> # Ground truth mask for each true object for each image - >>> gt_masks_list = [ - >>> BitmapMasks(rng.rand(8, H, W), height=H, width=W), - >>> BitmapMasks(rng.rand(6, H, W), height=H, width=W), - >>> ] - >>> mask_targets = mask_target( - >>> pos_proposals_list, pos_assigned_gt_inds_list, - >>> gt_masks_list, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - cfg_list = [cfg for _ in range(len(pos_proposals_list))] - mask_targets = map(mask_target_single, pos_proposals_list, - pos_assigned_gt_inds_list, gt_masks_list, cfg_list) - mask_targets = list(mask_targets) - if len(mask_targets) > 0: - mask_targets = torch.cat(mask_targets) - return mask_targets - - -def mask_target_single(pos_proposals, pos_assigned_gt_inds, gt_masks, cfg): - """Compute mask target for each positive proposal in the image. - - Args: - pos_proposals (Tensor): Positive proposals. - pos_assigned_gt_inds (Tensor): Assigned GT inds of positive proposals. - gt_masks (:obj:`BaseInstanceMasks`): GT masks in the format of Bitmap - or Polygon. - cfg (dict): Config dict that indicate the mask size. - - Returns: - Tensor: Mask target of each positive proposals in the image. 
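The geometry step here is simple but easy to get wrong; as a standalone reference, this is the clamping that `mask_target_single` applies before cropping (a sketch mirroring the code further down, with `clip_boxes` as a hypothetical helper name):

```python
import numpy as np

def clip_boxes(boxes: np.ndarray, height: int, width: int) -> np.ndarray:
    # Clamp (tl_x, tl_y, br_x, br_y) proposals to the image bounds,
    # exactly as mask_target_single does before crop_and_resize.
    boxes = boxes.copy()
    boxes[:, [0, 2]] = np.clip(boxes[:, [0, 2]], 0, width)
    boxes[:, [1, 3]] = np.clip(boxes[:, [1, 3]], 0, height)
    return boxes
```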
- - Example: - >>> import mmcv - >>> import mmdet - >>> from mmdet.core.mask import BitmapMasks - >>> from mmdet.core.mask.mask_target import * # NOQA - >>> H, W = 32, 32 - >>> cfg = mmcv.Config({'mask_size': (7, 11)}) - >>> rng = np.random.RandomState(0) - >>> # Masks for each ground truth box (relative to the image) - >>> gt_masks_data = rng.rand(3, H, W) - >>> gt_masks = BitmapMasks(gt_masks_data, height=H, width=W) - >>> # Predicted positive boxes in one image - >>> pos_proposals = torch.FloatTensor([ - >>> [ 16.2, 5.5, 19.9, 20.9], - >>> [ 17.3, 13.6, 19.3, 19.3], - >>> [ 14.8, 16.4, 17.0, 23.7], - >>> [ 0.0, 0.0, 16.0, 16.0], - >>> [ 4.0, 0.0, 20.0, 16.0], - >>> ]) - >>> # For each predicted proposal, its assignment to a gt mask - >>> pos_assigned_gt_inds = torch.LongTensor([0, 1, 2, 1, 1]) - >>> mask_targets = mask_target_single( - >>> pos_proposals, pos_assigned_gt_inds, gt_masks, cfg) - >>> assert mask_targets.shape == (5,) + cfg['mask_size'] - """ - device = pos_proposals.device - mask_size = _pair(cfg.mask_size) - num_pos = pos_proposals.size(0) - if num_pos > 0: - proposals_np = pos_proposals.cpu().numpy() - maxh, maxw = gt_masks.height, gt_masks.width - proposals_np[:, [0, 2]] = np.clip(proposals_np[:, [0, 2]], 0, maxw) - proposals_np[:, [1, 3]] = np.clip(proposals_np[:, [1, 3]], 0, maxh) - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - - mask_targets = gt_masks.crop_and_resize( - proposals_np, mask_size, device=device, - inds=pos_assigned_gt_inds).to_ndarray() - - mask_targets = torch.from_numpy(mask_targets).float().to(device) - else: - mask_targets = pos_proposals.new_zeros((0, ) + mask_size) - - return mask_targets diff --git a/spaces/Hakim571/Food-Classification/app.py b/spaces/Hakim571/Food-Classification/app.py deleted file mode 100644 index 25892462aa1a926e359404b844c0ab0d8c36ba41..0000000000000000000000000000000000000000 --- a/spaces/Hakim571/Food-Classification/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import tensorflow as tf -from tensorflow.keras.utils import load_img, img_to_array -import numpy as np -import gradio as gr - -class_names=['Ayam Goreng','Bakso','Bubur Ayam','Ikan Lele Goreng','Mi Goreng','Nasi','Sate','Soto','Telur dadar','Telur mata sapi','Ikan mujahir goreng','Lontong','Pempek telur','Singkong Goreng','Tempe kedelai murni, goreng'] - -model=tf.keras.models.load_model('./my_model') - -def import_and_predict(image_data): - x = image_data.reshape((-1, 224, 224, 3)) - x = tf.keras.applications.imagenet_utils.preprocess_input(x, mode="tf") - prediction = model.predict(x) - labels=class_names - confidences = {labels[i]: float(prediction[0][i]) for i in range(15)} - return confidences -#test -gr.Interface(fn=import_and_predict, - inputs=gr.inputs.Image(shape=(224, 224)), - outputs=gr.outputs.Label(num_top_classes=3), - cache_examples=False, - examples=["Bakso.jpeg", "Sate.jpeg"]).launch() \ No newline at end of file diff --git a/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/modeling_longformer.py b/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/modeling_longformer.py deleted file mode 100644 index 697782a467a212926bba68e8a6791545f3c9f6e2..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/models/longformer/modeling_longformer.py +++ /dev/null @@ -1,2485 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The Allen Institute for AI team and The HuggingFace Inc. team. 
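One aside on the Gradio classifier above before the Longformer sources: `preprocess_input(..., mode="tf")` rescales pixel values from [0, 255] into [-1, 1]. A minimal equivalent, assuming float input (`preprocess_tf_mode` is a hypothetical name):

```python
import numpy as np

def preprocess_tf_mode(x: np.ndarray) -> np.ndarray:
    # Equivalent of keras imagenet_utils.preprocess_input(x, mode="tf"):
    # scale [0, 255] pixel values into [-1, 1].
    return x / 127.5 - 1.0
```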
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""PyTorch Longformer model. """ - -import math -from dataclasses import dataclass -from typing import Optional, Tuple -from numpy.lib.function_base import kaiser - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from transformers.activations import ACT2FN, gelu -from transformers.file_utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - replace_return_docstrings, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import logging -from transformers import LongformerConfig - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "allenai/longformer-base-4096" -_CONFIG_FOR_DOC = "LongformerConfig" -_TOKENIZER_FOR_DOC = "LongformerTokenizer" - -LONGFORMER_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "allenai/longformer-base-4096", - "allenai/longformer-large-4096", - "allenai/longformer-large-4096-finetuned-triviaqa", - "allenai/longformer-base-4096-extra.pos.embd.only", - "allenai/longformer-large-4096-extra.pos.embd.only", - # See all Longformer models at https://huggingface.co/models?filter=longformer -] - - -@dataclass -class LongformerBaseModelOutput(ModelOutput): - """ - Base class for Longformer's outputs, with potential hidden states, local and global attentions. - - Args: - last_hidden_state (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): - Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) - of shape :obj:`(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x + attention_window + 1)`, where ``x`` is the number of tokens with global attention - mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first ``x`` values) and to every token in the attention window (remaining - ``attention_window + 1`` values). 
Note that the first ``x`` values refer to tokens with fixed positions in - the text, but the remaining ``attention_window + 1`` values refer to tokens with relative positions: the - attention weight of a token to itself is located at index ``x + attention_window / 2`` and the - ``attention_window / 2`` preceding (succeeding) values are the attention weights to the ``attention_window - / 2`` preceding (succeeding) tokens. If the attention window contains a token with global attention, the - attention weight at the corresponding index is set to 0; the value should be accessed from the first ``x`` - attention weights. If a token has global attention, the attention weights to all other tokens in - :obj:`attentions` is set to 0, the values should be accessed from :obj:`global_attentions`. - global_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - last_hidden_state: torch.FloatTensor - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - global_attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class LongformerBaseModelOutputWithPooling(ModelOutput): - """ - Base class for Longformer's outputs that also contains a pooling of the last hidden states. - - Args: - last_hidden_state (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`): - Sequence of hidden-states at the output of the last layer of the model. - pooler_output (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, hidden_size)`): - Last layer hidden-state of the first token of the sequence (classification token) further processed by a - Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence - prediction (classification) objective during pretraining. - hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): - Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) - of shape :obj:`(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x + attention_window + 1)`, where ``x`` is the number of tokens with global attention - mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first ``x`` values) and to every token in the attention window (remaining - ``attention_window + 1`` values). 
Note that the first ``x`` values refer to tokens with fixed positions in - the text, but the remaining ``attention_window + 1`` values refer to tokens with relative positions: the - attention weight of a token to itself is located at index ``x + attention_window / 2`` and the - ``attention_window / 2`` preceding (succeeding) values are the attention weights to the ``attention_window - / 2`` preceding (succeeding) tokens. If the attention window contains a token with global attention, the - attention weight at the corresponding index is set to 0; the value should be accessed from the first ``x`` - attention weights. If a token has global attention, the attention weights to all other tokens in - :obj:`attentions` is set to 0, the values should be accessed from :obj:`global_attentions`. - global_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - last_hidden_state: torch.FloatTensor - pooler_output: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - global_attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class LongformerMaskedLMOutput(ModelOutput): - """ - Base class for masked language models outputs. - - Args: - loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`labels` is provided): - Masked language modeling (MLM) loss. - logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): - Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) - of shape :obj:`(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x + attention_window + 1)`, where ``x`` is the number of tokens with global attention - mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first ``x`` values) and to every token in the attention window (remaining - ``attention_window + 1`` values). 
Note that the first ``x`` values refer to tokens with fixed positions in - the text, but the remaining ``attention_window + 1`` values refer to tokens with relative positions: the - attention weight of a token to itself is located at index ``x + attention_window / 2`` and the - ``attention_window / 2`` preceding (succeeding) values are the attention weights to the ``attention_window - / 2`` preceding (succeeding) tokens. If the attention window contains a token with global attention, the - attention weight at the corresponding index is set to 0; the value should be accessed from the first ``x`` - attention weights. If a token has global attention, the attention weights to all other tokens in - :obj:`attentions` is set to 0, the values should be accessed from :obj:`global_attentions`. - global_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - global_attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class LongformerQuestionAnsweringModelOutput(ModelOutput): - """ - Base class for outputs of question answering Longformer models. - - Args: - loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`labels` is provided): - Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. - start_logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`): - Span-start scores (before SoftMax). - end_logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`): - Span-end scores (before SoftMax). - hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): - Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) - of shape :obj:`(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x + attention_window + 1)`, where ``x`` is the number of tokens with global attention - mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first ``x`` values) and to every token in the attention window (remaining - ``attention_window + 1`` values). 
Note that the first ``x`` values refer to tokens with fixed positions in - the text, but the remaining ``attention_window + 1`` values refer to tokens with relative positions: the - attention weight of a token to itself is located at index ``x + attention_window / 2`` and the - ``attention_window / 2`` preceding (succeeding) values are the attention weights to the ``attention_window - / 2`` preceding (succeeding) tokens. If the attention window contains a token with global attention, the - attention weight at the corresponding index is set to 0; the value should be accessed from the first ``x`` - attention weights. If a token has global attention, the attention weights to all other tokens in - :obj:`attentions` is set to 0, the values should be accessed from :obj:`global_attentions`. - global_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: Optional[torch.FloatTensor] = None - start_logits: torch.FloatTensor = None - end_logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - global_attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class LongformerSequenceClassifierOutput(ModelOutput): - """ - Base class for outputs of sentence classification models. - - Args: - loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`labels` is provided): - Classification (or regression if config.num_labels==1) loss. - logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, config.num_labels)`): - Classification (or regression if config.num_labels==1) scores (before SoftMax). - hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): - Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) - of shape :obj:`(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x + attention_window + 1)`, where ``x`` is the number of tokens with global attention - mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first ``x`` values) and to every token in the attention window (remaining - ``attention_window + 1`` values). 
Note that the first ``x`` values refer to tokens with fixed positions in - the text, but the remaining ``attention_window + 1`` values refer to tokens with relative positions: the - attention weight of a token to itself is located at index ``x + attention_window / 2`` and the - ``attention_window / 2`` preceding (succeeding) values are the attention weights to the ``attention_window - / 2`` preceding (succeeding) tokens. If the attention window contains a token with global attention, the - attention weight at the corresponding index is set to 0; the value should be accessed from the first ``x`` - attention weights. If a token has global attention, the attention weights to all other tokens in - :obj:`attentions` is set to 0, the values should be accessed from :obj:`global_attentions`. - global_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - global_attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class LongformerMultipleChoiceModelOutput(ModelOutput): - """ - Base class for outputs of multiple choice Longformer models. - - Args: - loss (:obj:`torch.FloatTensor` of shape `(1,)`, `optional`, returned when :obj:`labels` is provided): - Classification loss. - logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, num_choices)`): - `num_choices` is the second dimension of the input tensors. (see `input_ids` above). - - Classification scores (before SoftMax). - hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): - Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) - of shape :obj:`(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x + attention_window + 1)`, where ``x`` is the number of tokens with global attention - mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first ``x`` values) and to every token in the attention window (remaining - ``attention_window + 1`` values). 
Note that the first ``x`` values refer to tokens with fixed positions in - the text, but the remaining ``attention_window + 1`` values refer to tokens with relative positions: the - attention weight of a token to itself is located at index ``x + attention_window / 2`` and the - ``attention_window / 2`` preceding (succeeding) values are the attention weights to the ``attention_window - / 2`` preceding (succeeding) tokens. If the attention window contains a token with global attention, the - attention weight at the corresponding index is set to 0; the value should be accessed from the first ``x`` - attention weights. If a token has global attention, the attention weights to all other tokens in - :obj:`attentions` is set to 0, the values should be accessed from :obj:`global_attentions`. - global_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - global_attentions: Optional[Tuple[torch.FloatTensor]] = None - - -@dataclass -class LongformerTokenClassifierOutput(ModelOutput): - """ - Base class for outputs of token classification models. - - Args: - loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when ``labels`` is provided) : - Classification loss. - logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.num_labels)`): - Classification scores (before SoftMax). - hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): - Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) - of shape :obj:`(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x + attention_window + 1)`, where ``x`` is the number of tokens with global attention - mask. - - Local attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token in the sequence to every token with - global attention (first ``x`` values) and to every token in the attention window (remaining - ``attention_window + 1`` values). 
Note that the first ``x`` values refer to tokens with fixed positions in - the text, but the remaining ``attention_window + 1`` values refer to tokens with relative positions: the - attention weight of a token to itself is located at index ``x + attention_window / 2`` and the - ``attention_window / 2`` preceding (succeeding) values are the attention weights to the ``attention_window - / 2`` preceding (succeeding) tokens. If the attention window contains a token with global attention, the - attention weight at the corresponding index is set to 0; the value should be accessed from the first ``x`` - attention weights. If a token has global attention, the attention weights to all other tokens in - :obj:`attentions` is set to 0, the values should be accessed from :obj:`global_attentions`. - global_attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): - Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, - sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. - - Global attentions weights after the attention softmax, used to compute the weighted average in the - self-attention heads. Those are the attention weights from every token with global attention to every token - in the sequence. - """ - - loss: Optional[torch.FloatTensor] = None - logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - global_attentions: Optional[Tuple[torch.FloatTensor]] = None - - -def _get_question_end_index(input_ids, sep_token_id): - """ - Computes the index of the first occurrence of `sep_token_id`. - """ - - sep_token_indices = (input_ids == sep_token_id).nonzero() - batch_size = input_ids.shape[0] - - assert sep_token_indices.shape[1] == 2, "`input_ids` should have two dimensions" - assert ( - sep_token_indices.shape[0] == 3 * batch_size - ), f"There should be exactly three separator tokens: {sep_token_id} in every sample for questions answering. You might also consider to set `global_attention_mask` manually in the forward function to avoid this error." - return sep_token_indices.view(batch_size, 3, 2)[:, 0, 1] - - -def _compute_global_attention_mask(input_ids, sep_token_id, before_sep_token=True): - """ - Computes global attention mask by putting attention on all tokens before `sep_token_id` if `before_sep_token is - True` else after `sep_token_id`. - """ - question_end_index = _get_question_end_index(input_ids, sep_token_id) - question_end_index = question_end_index.unsqueeze( - dim=1) # size: batch_size x 1 - # bool attention mask with True in locations of global attention - attention_mask = torch.arange(input_ids.shape[1], device=input_ids.device) - if before_sep_token is True: - attention_mask = (attention_mask.expand_as(input_ids) - < question_end_index).to(torch.uint8) - else: - # last token is separation token and should not be counted and in the middle are two separation tokens - attention_mask = (attention_mask.expand_as(input_ids) > (question_end_index + 1)).to(torch.uint8) * ( - attention_mask.expand_as(input_ids) < input_ids.shape[-1] - ).to(torch.uint8) - - return attention_mask - - -def create_position_ids_from_input_ids(input_ids, padding_idx): - """ - Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols - are ignored. This is modified from fairseq's `utils.make_positions`. 
- - Args: - x: torch.Tensor x: - - Returns: torch.Tensor - """ - # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA. - mask = input_ids.ne(padding_idx).int() - incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask - return incremental_indices.long() + padding_idx - - -class LongformerEmbeddings(nn.Module): - """ - Same as BertEmbeddings with a tiny tweak for positional embeddings indexing. - """ - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding( - config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding( - config.max_position_embeddings, config.hidden_size) - self.token_type_embeddings = nn.Embedding( - config.type_vocab_size, config.hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm( - config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - - # Modify - # self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - # self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - - # self.padding_idx = config.pad_token_id - # self.position_embeddings = nn.Embedding( - # config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx - # ) - - def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None): - - # if position_ids is None: - # if input_ids is not None: - # # Create the position ids from the input token ids. Any padded tokens remain padded. - # position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device) - # else: - # position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds) - - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - # if position_ids is None: - # position_ids = self.position_ids[:, :seq_length] - - if token_type_ids is None: - token_type_ids = torch.zeros( - input_shape, dtype=torch.long, device=self.position_ids.device) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - - # Modify - # position_embeddings = self.position_embeddings(position_ids) - - token_type_embeddings = self.token_type_embeddings(token_type_ids) - - embeddings = inputs_embeds + token_type_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - def create_position_ids_from_inputs_embeds(self, inputs_embeds): - """ - We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids. 
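A quick worked example of `create_position_ids_from_input_ids` above, assuming `padding_idx = 1` (the RoBERTa convention):

```python
import torch

ids = torch.tensor([[5, 7, 9, 1, 1]])          # two trailing pad tokens
mask = ids.ne(1).int()                         # [[1, 1, 1, 0, 0]]
positions = torch.cumsum(mask, dim=1).type_as(mask) * mask + 1
# positions -> [[2, 3, 4, 1, 1]]: real tokens count up from padding_idx + 1,
# padded slots stay parked at padding_idx.
```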
- - Args: - inputs_embeds: torch.Tensor inputs_embeds: - - Returns: torch.Tensor - """ - input_shape = inputs_embeds.size()[:-1] - sequence_length = input_shape[1] - - position_ids = torch.arange( - self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device - ) - return position_ids.unsqueeze(0).expand(input_shape) - - -class RoPEmbedding(nn.Module): - def __init__(self, d_model): - super(RoPEmbedding, self).__init__() - self.d_model = d_model - div_term = torch.exp(torch.arange( - 0, d_model, 2).float() * (-math.log(10000.0) / d_model)) - self.register_buffer('div_term', div_term) - - def forward(self, x, seq_dim=0): - x = x # [seq_len,num_head,batch_size,per_head_hidden_size] - t = torch.arange(x.size(seq_dim), device=x.device).type_as( - self.div_term) - sinusoid_inp = torch.outer(t, self.div_term) - sin, cos = sinusoid_inp.sin(), sinusoid_inp.cos() # [s, hn] - o_shape = (sin.size(0), 1, 1, sin.size(1)) - sin, cos = sin.view(*o_shape), cos.view(*o_shape) # [s, 1, 1, hn] - sin = torch.repeat_interleave(sin, 2, dim=-1) - cos = torch.repeat_interleave(cos, 2, dim=-1) - x2 = torch.stack([-x[..., 1::2], x[..., ::2]], dim=-1).reshape_as(x) - x = cos * x + sin * x2 - return x - - -class LongformerSelfAttention(nn.Module): - def __init__(self, config, layer_id): - super().__init__() - if config.hidden_size % config.num_attention_heads != 0: - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads})" - ) - self.config = config - self.num_heads = config.num_attention_heads - self.head_dim = int(config.hidden_size / config.num_attention_heads) - self.embed_dim = config.hidden_size - - self.query = nn.Linear(config.hidden_size, self.embed_dim) - self.key = nn.Linear(config.hidden_size, self.embed_dim) - self.value = nn.Linear(config.hidden_size, self.embed_dim) - - # separate projection layers for tokens with global attention - # self.query_global = nn.Linear(config.hidden_size, self.embed_dim) - # self.key_global = nn.Linear(config.hidden_size, self.embed_dim) - # self.value_global = nn.Linear(config.hidden_size, self.embed_dim) - - self.dropout = config.attention_probs_dropout_prob - - self.layer_id = layer_id - attention_window = config.attention_window[self.layer_id] - assert ( - attention_window % 2 == 0 - ), f"`attention_window` for layer {self.layer_id} has to be an even value. Given {attention_window}" - assert ( - attention_window > 0 - ), f"`attention_window` for layer {self.layer_id} has to be positive. Given {attention_window}" - - self.one_sided_attn_window_size = attention_window // 2 - self.rope_emb = RoPEmbedding(self.head_dim) - - def forward( - self, - hidden_states, - attention_mask=None, - layer_head_mask=None, - is_index_masked=None, - is_index_global_attn=None, - is_global_attn=None, - output_attentions=False, - ): - """ - :class:`LongformerSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to - `attention_window` happens in :meth:`LongformerModel.forward` to avoid redoing the padding on each layer. 
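The `RoPEmbedding` module defined above rotates each even/odd feature pair by a position-dependent angle before the attention matmul. A self-contained restatement for a `[seq_len, ..., head_dim]` tensor (a sketch; `rope` is a hypothetical function name and `head_dim` must be even):

```python
import math
import torch

def rope(x: torch.Tensor, seq_dim: int = 0) -> torch.Tensor:
    d = x.size(-1)
    inv_freq = torch.exp(torch.arange(0, d, 2, dtype=torch.float32)
                         * (-math.log(10000.0) / d))
    t = torch.arange(x.size(seq_dim), dtype=torch.float32)
    angles = torch.outer(t, inv_freq)                        # [s, d/2]
    sin = torch.repeat_interleave(angles.sin(), 2, dim=-1)   # [s, d]
    cos = torch.repeat_interleave(angles.cos(), 2, dim=-1)
    shape = [1] * x.dim()
    shape[seq_dim], shape[-1] = x.size(seq_dim), d
    sin, cos = sin.view(shape), cos.view(shape)
    # rotate pairs: (x0, x1) -> (x0*cos - x1*sin, x1*cos + x0*sin)
    x2 = torch.stack([-x[..., 1::2], x[..., ::2]], dim=-1).reshape_as(x)
    return cos * x + sin * x2
```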
-
-        The `attention_mask` is changed in :meth:`LongformerModel.forward` from 0, 1, 2 to:
-
-            * -10000: no attention
-            * 0: local attention
-            * +10000: global attention
-        """
-
-        # print(attention_mask.shape)
-        if not self.config.use_sparse_attention:  # if sparse attention is disabled, fall back to standard full self-attention
-            hidden_states = hidden_states.transpose(0, 1)
-            # project hidden states
-            query_vectors = self.query(hidden_states)
-            key_vectors = self.key(hidden_states)
-            value_vectors = self.value(hidden_states)
-
-            seq_len, batch_size, embed_dim = hidden_states.size()
-            assert (
-                embed_dim == self.embed_dim
-            ), f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}"
-
-            # normalize query
-
-            # query_vectors = query_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)
-            # key_vectors = key_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)
-
-            # print('query_vectors',query_vectors.shape)
-
-            query_vectors = query_vectors.view(
-                seq_len, batch_size, self.num_heads, self.head_dim).transpose(1, 2)
-            key_vectors = key_vectors.view(
-                seq_len, batch_size, self.num_heads, self.head_dim).transpose(1, 2)
-
-            query_vectors = self.rope_emb(query_vectors)
-            key_vectors = self.rope_emb(key_vectors)
-
-            query_vectors = query_vectors.transpose(0, 2)  # [b,mh,s,hd]
-            key_vectors = key_vectors.transpose(0, 2).transpose(2, 3)
-
-            # print('query_vectors',query_vectors.shape)
-
-            query_vectors /= math.sqrt(self.head_dim)
-
-            attention_mask = self.get_extended_attention_mask(
-                attention_mask, attention_mask.shape, attention_mask.device)
-            attn_scores = torch.matmul(
-                query_vectors, key_vectors) + attention_mask
-
-            attn_scores = torch.nn.functional.softmax(attn_scores, dim=-1)
-
-            value_vectors = value_vectors.view(
-                seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1).transpose(1, 2)
-            outputs = torch.matmul(attn_scores, value_vectors).transpose(
-                1, 2).contiguous().view(batch_size, seq_len, self.num_heads * self.head_dim)
-
-            # print('output',outputs.shape)
-            outputs = (outputs,)
-            return outputs + (attn_scores,)
-
-        # print('hidden.shape',hidden_states.shape)
-        # print('attention_mask.shape',attention_mask.shape)
-        # print('att_mask:',attention_mask)
-
-        hidden_states = hidden_states.transpose(0, 1)
-
-        # project hidden states
-        query_vectors = self.query(hidden_states)
-        key_vectors = self.key(hidden_states)
-        value_vectors = self.value(hidden_states)
-
-        seq_len, batch_size, embed_dim = hidden_states.size()
-        assert (
-            embed_dim == self.embed_dim
-        ), f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}"
-
-        # normalize query
-
-        # query_vectors = query_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)
-        # key_vectors = key_vectors.view(seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1)
-
-        query_vectors = query_vectors.view(
-            seq_len, batch_size, self.num_heads, self.head_dim).transpose(1, 2)
-        key_vectors = key_vectors.view(
-            seq_len, batch_size, self.num_heads, self.head_dim).transpose(1, 2)
-
-        query_vectors = self.rope_emb(query_vectors)
-        key_vectors = self.rope_emb(key_vectors)
-
-        query_vectors = query_vectors.transpose(1, 2).transpose(0, 1)
-        key_vectors = key_vectors.transpose(1, 2).transpose(0, 1)
-
-        query_vectors /= math.sqrt(self.head_dim)
-
-        attn_scores = self._sliding_chunks_query_key_matmul(
-            query_vectors, key_vectors, self.one_sided_attn_window_size
-        )
-        # print('att:',attn_scores.shape)
-        # values to pad for attention probs
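As an aside before the masking code: the sliding-window scores above come from viewing the sequence as chunks of size 2w that overlap by w, built with `as_strided` (see the `_chunk` helper further down). A standalone sketch of that view, with `chunk_with_overlap` as a hypothetical stand-in:

```python
import torch

def chunk_with_overlap(x: torch.Tensor, w: int) -> torch.Tensor:
    # [b, s, d] -> [b, s // w - 1, 2w, d]: windows of size 2w overlapping by w.
    b, s, d = x.shape
    assert s % (2 * w) == 0, "sequence length must be a multiple of 2w"
    x = x.view(b, s // (2 * w), 2 * w, d)
    size = (b, 2 * (s // (2 * w)) - 1, 2 * w, d)
    stride = list(x.stride())
    stride[1] //= 2
    return x.as_strided(size=size, stride=tuple(stride))

# e.g. chunk_with_overlap(torch.arange(8.).view(1, 8, 1), 2) yields the
# windows [0..3], [2..5] and [4..7] without copying the underlying data.
```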
remove_from_windowed_attention_mask = ( - attention_mask != 0)[:, :, None, None] - - # cast to fp32/fp16 then replace 1's with -inf - float_mask = remove_from_windowed_attention_mask.type_as(query_vectors).masked_fill( - remove_from_windowed_attention_mask, -10000.0 - ) - # diagonal mask with zeros everywhere and -inf inplace of padding - diagonal_mask = self._sliding_chunks_query_key_matmul( - float_mask.new_ones(size=float_mask.size() - ), float_mask, self.one_sided_attn_window_size - ) - - # pad local attention probs - attn_scores += diagonal_mask - - assert list(attn_scores.size()) == [ - batch_size, - seq_len, - self.num_heads, - self.one_sided_attn_window_size * 2 + 1, - ], f"local_attn_probs should be of size ({batch_size}, {seq_len}, {self.num_heads}, {self.one_sided_attn_window_size * 2 + 1}), but is of size {attn_scores.size()}" - - # compute local attention probs from global attention keys and contact over window dim - if is_global_attn: - # compute global attn indices required through out forward fn - ( - max_num_global_attn_indices, - is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero, - ) = self._get_global_attn_indices(is_index_global_attn) - # calculate global attn probs from global key - - global_key_attn_scores = self._concat_with_global_key_attn_probs( - query_vectors=query_vectors, - key_vectors=key_vectors, - max_num_global_attn_indices=max_num_global_attn_indices, - is_index_global_attn_nonzero=is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, - ) - # concat to local_attn_probs - # (batch_size, seq_len, num_heads, extra attention count + 2*window+1) - attn_scores = torch.cat( - (global_key_attn_scores, attn_scores), dim=-1) - - # free memory - del global_key_attn_scores - - attn_probs = nn.functional.softmax( - attn_scores, dim=-1, dtype=torch.float32 - ) # use fp32 for numerical stability - - if layer_head_mask is not None: - assert layer_head_mask.size() == ( - self.num_heads, - ), f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}" - attn_probs = layer_head_mask.view(1, 1, -1, 1) * attn_probs - - # softmax sometimes inserts NaN if all positions are masked, replace them with 0 - attn_probs = torch.masked_fill( - attn_probs, is_index_masked[:, :, None, None], 0.0) - attn_probs = attn_probs.type_as(attn_scores) - - # free memory - del attn_scores - - # apply dropout - attn_probs = nn.functional.dropout( - attn_probs, p=self.dropout, training=self.training) - - value_vectors = value_vectors.view( - seq_len, batch_size, self.num_heads, self.head_dim).transpose(0, 1) - - # compute local attention output with global attention value and add - if is_global_attn: - # compute sum of global and local attn - attn_output = self._compute_attn_output_with_global_indices( - value_vectors=value_vectors, - attn_probs=attn_probs, - max_num_global_attn_indices=max_num_global_attn_indices, - is_index_global_attn_nonzero=is_index_global_attn_nonzero, - is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, - ) - else: - # compute local attn only - attn_output = self._sliding_chunks_matmul_attn_probs_value( - attn_probs, value_vectors, self.one_sided_attn_window_size - ) - - assert attn_output.size() == (batch_size, seq_len, self.num_heads, - self.head_dim), "Unexpected size" - attn_output = attn_output.transpose(0, 1).reshape( - 
seq_len, batch_size, embed_dim).contiguous() - - # compute value for global attention and overwrite to attention output - # TODO: remove the redundant computation - if is_global_attn: - global_attn_output, global_attn_probs = self._compute_global_attn_output_from_hidden( - global_query_vectors=query_vectors, - global_key_vectors=key_vectors, - global_value_vectors=value_vectors, - max_num_global_attn_indices=max_num_global_attn_indices, - layer_head_mask=layer_head_mask, - is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, - is_index_global_attn_nonzero=is_index_global_attn_nonzero, - is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, - is_index_masked=is_index_masked, - ) - # print('global_attn_output',global_attn_output.shape) - # get only non zero global attn output - nonzero_global_attn_output = global_attn_output[ - is_local_index_global_attn_nonzero[0], :, is_local_index_global_attn_nonzero[1] - ] - # print('nonzero_global_attn_output',nonzero_global_attn_output.shape) - # overwrite values with global attention - attn_output[is_index_global_attn_nonzero[::-1]] = nonzero_global_attn_output.view( - len(is_local_index_global_attn_nonzero[0]), -1 - ) - # The attention weights for tokens with global attention are - # just filler values, they were never used to compute the output. - # Fill with 0 now, the correct values are in 'global_attn_probs'. - attn_probs[is_index_global_attn_nonzero] = 0 - - outputs = (attn_output.transpose(0, 1),) - - if output_attentions: - outputs += (attn_probs,) - - return outputs + (global_attn_probs,) if (is_global_attn and output_attentions) else outputs - - @staticmethod - def _pad_and_transpose_last_two_dims(hidden_states_padded, padding): - """pads rows and then flips rows and columns""" - hidden_states_padded = nn.functional.pad( - hidden_states_padded, padding - ) # padding value is not important because it will be overwritten - hidden_states_padded = hidden_states_padded.view( - *hidden_states_padded.size()[:-2], hidden_states_padded.size(-1), hidden_states_padded.size(-2) - ) - return hidden_states_padded - - @staticmethod - def _pad_and_diagonalize(chunked_hidden_states): - """ - shift every row 1 step right, converting columns into diagonals. - - Example:: - - chunked_hidden_states: [ 0.4983, 2.6918, -0.0071, 1.0492, - -1.8348, 0.7672, 0.2986, 0.0285, - -0.7584, 0.4206, -0.0405, 0.1599, - 2.0514, -1.1600, 0.5372, 0.2629 ] - window_overlap = num_rows = 4 - (pad & diagonalize) => - [ 0.4983, 2.6918, -0.0071, 1.0492, 0.0000, 0.0000, 0.0000 - 0.0000, -1.8348, 0.7672, 0.2986, 0.0285, 0.0000, 0.0000 - 0.0000, 0.0000, -0.7584, 0.4206, -0.0405, 0.1599, 0.0000 - 0.0000, 0.0000, 0.0000, 2.0514, -1.1600, 0.5372, 0.2629 ] - """ - total_num_heads, num_chunks, window_overlap, hidden_dim = chunked_hidden_states.size() - chunked_hidden_states = nn.functional.pad( - chunked_hidden_states, (0, window_overlap + 1) - ) # total_num_heads x num_chunks x window_overlap x (hidden_dim+window_overlap+1). 
Padding value is not important because it'll be overwritten - chunked_hidden_states = chunked_hidden_states.view( - total_num_heads, num_chunks, -1 - ) # total_num_heads x num_chunks x window_overlap*window_overlap+window_overlap - chunked_hidden_states = chunked_hidden_states[ - :, :, :-window_overlap - ] # total_num_heads x num_chunks x window_overlap*window_overlap - chunked_hidden_states = chunked_hidden_states.view( - total_num_heads, num_chunks, window_overlap, window_overlap + hidden_dim - ) - chunked_hidden_states = chunked_hidden_states[:, :, :, :-1] - return chunked_hidden_states - - @staticmethod - def _chunk(hidden_states, window_overlap): - """convert into overlapping chunks. Chunk size = 2w, overlap size = w""" - - # non-overlapping chunks of size = 2w - hidden_states = hidden_states.view( - hidden_states.size(0), - hidden_states.size(1) // (window_overlap * 2), - window_overlap * 2, - hidden_states.size(2), - ) - - # use `as_strided` to make the chunks overlap with an overlap size = window_overlap - chunk_size = list(hidden_states.size()) - chunk_size[1] = chunk_size[1] * 2 - 1 - - chunk_stride = list(hidden_states.stride()) - chunk_stride[1] = chunk_stride[1] // 2 - return hidden_states.as_strided(size=chunk_size, stride=chunk_stride) - - @staticmethod - def _mask_invalid_locations(input_tensor, affected_seq_len) -> torch.Tensor: - beginning_mask_2d = input_tensor.new_ones( - affected_seq_len, affected_seq_len + 1).tril().flip(dims=[0]) - beginning_mask = beginning_mask_2d[None, :, None, :] - ending_mask = beginning_mask.flip(dims=(1, 3)) - beginning_input = input_tensor[:, - :affected_seq_len, :, : affected_seq_len + 1] - beginning_mask = beginning_mask.expand(beginning_input.size()) - # `== 1` converts to bool or uint8 - beginning_input.masked_fill_(beginning_mask == 1, -float("inf")) - ending_input = input_tensor[:, - - affected_seq_len:, :, -(affected_seq_len + 1):] - ending_mask = ending_mask.expand(ending_input.size()) - # `== 1` converts to bool or uint8 - ending_input.masked_fill_(ending_mask == 1, -float("inf")) - - def _sliding_chunks_query_key_matmul(self, query: torch.Tensor, key: torch.Tensor, window_overlap: int): - """ - Matrix multiplication of query and key tensors using with a sliding window attention pattern. This - implementation splits the input into overlapping chunks of size 2w (e.g. 512 for pretrained Longformer) with an - overlap of size window_overlap - """ - batch_size, seq_len, num_heads, head_dim = query.size() - assert ( - seq_len % (window_overlap * 2) == 0 - ), f"Sequence length should be multiple of {window_overlap * 2}. 
Given {seq_len}" - assert query.size() == key.size() - - chunks_count = seq_len // window_overlap - 1 - - # group batch_size and num_heads dimensions into one, then chunk seq_len into chunks of size window_overlap * 2 - query = query.transpose(1, 2).reshape( - batch_size * num_heads, seq_len, head_dim) - key = key.transpose(1, 2).reshape( - batch_size * num_heads, seq_len, head_dim) - - query = self._chunk(query, window_overlap) - key = self._chunk(key, window_overlap) - - # matrix multiplication - # bcxd: batch_size * num_heads x chunks x 2window_overlap x head_dim - # bcyd: batch_size * num_heads x chunks x 2window_overlap x head_dim - # bcxy: batch_size * num_heads x chunks x 2window_overlap x 2window_overlap - diagonal_chunked_attention_scores = torch.einsum( - "bcxd,bcyd->bcxy", (query, key)) # multiply - - # convert diagonals into columns - diagonal_chunked_attention_scores = self._pad_and_transpose_last_two_dims( - diagonal_chunked_attention_scores, padding=(0, 0, 0, 1) - ) - - # allocate space for the overall attention matrix where the chunks are combined. The last dimension - # has (window_overlap * 2 + 1) columns. The first (window_overlap) columns are the window_overlap lower triangles (attention from a word to - # window_overlap previous words). The following column is attention score from each word to itself, then - # followed by window_overlap columns for the upper triangle. - - diagonal_attention_scores = diagonal_chunked_attention_scores.new_empty( - (batch_size * num_heads, chunks_count + 1, - window_overlap, window_overlap * 2 + 1) - ) - - # copy parts from diagonal_chunked_attention_scores into the combined matrix of attentions - # - copying the main diagonal and the upper triangle - diagonal_attention_scores[:, :-1, :, window_overlap:] = diagonal_chunked_attention_scores[ - :, :, :window_overlap, : window_overlap + 1 - ] - diagonal_attention_scores[:, -1, :, window_overlap:] = diagonal_chunked_attention_scores[ - :, -1, window_overlap:, : window_overlap + 1 - ] - # - copying the lower triangle - diagonal_attention_scores[:, 1:, :, :window_overlap] = diagonal_chunked_attention_scores[ - :, :, -(window_overlap + 1): -1, window_overlap + 1: - ] - - diagonal_attention_scores[:, 0, 1:window_overlap, 1:window_overlap] = diagonal_chunked_attention_scores[ - :, 0, : window_overlap - 1, 1 - window_overlap: - ] - - # separate batch_size and num_heads dimensions again - diagonal_attention_scores = diagonal_attention_scores.view( - batch_size, num_heads, seq_len, 2 * window_overlap + 1 - ).transpose(2, 1) - - self._mask_invalid_locations(diagonal_attention_scores, window_overlap) - return diagonal_attention_scores - - def _sliding_chunks_matmul_attn_probs_value( - self, attn_probs: torch.Tensor, value: torch.Tensor, window_overlap: int - ): - """ - Same as _sliding_chunks_query_key_matmul but for attn_probs and value tensors. 
-    def _sliding_chunks_matmul_attn_probs_value(
-        self, attn_probs: torch.Tensor, value: torch.Tensor, window_overlap: int
-    ):
-        """
-        Same as _sliding_chunks_query_key_matmul but for attn_probs and value tensors. Returned tensor will be of the
-        same shape as `attn_probs`
-        """
-        batch_size, seq_len, num_heads, head_dim = value.size()
-
-        assert seq_len % (window_overlap * 2) == 0
-        assert attn_probs.size()[:3] == value.size()[:3]
-        assert attn_probs.size(3) == 2 * window_overlap + 1
-        chunks_count = seq_len // window_overlap - 1
-
-        # group batch_size and num_heads dimensions into one, then chunk seq_len into chunks of size 2 * window_overlap
-        chunked_attn_probs = attn_probs.transpose(1, 2).reshape(
-            batch_size * num_heads, seq_len // window_overlap, window_overlap, 2 * window_overlap + 1
-        )
-
-        # group batch_size and num_heads dimensions into one
-        value = value.transpose(1, 2).reshape(batch_size * num_heads, seq_len, head_dim)
-
-        # pad seq_len with window_overlap values at the beginning of the sequence and another window_overlap at the end
-        padded_value = nn.functional.pad(value, (0, 0, window_overlap, window_overlap), value=-1)
-
-        # chunk padded_value into chunks of size 3 * window_overlap and an overlap of size window_overlap
-        chunked_value_size = (batch_size * num_heads, chunks_count + 1, 3 * window_overlap, head_dim)
-        chunked_value_stride = padded_value.stride()
-        chunked_value_stride = (
-            chunked_value_stride[0],
-            window_overlap * chunked_value_stride[1],
-            chunked_value_stride[1],
-            chunked_value_stride[2],
-        )
-        chunked_value = padded_value.as_strided(size=chunked_value_size, stride=chunked_value_stride)
-
-        chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs)
-
-        context = torch.einsum("bcwd,bcdh->bcwh", (chunked_attn_probs, chunked_value))
-        return context.view(batch_size, num_heads, seq_len, head_dim).transpose(1, 2)
-
-    @staticmethod
-    def _get_global_attn_indices(is_index_global_attn):
-        """compute global attn indices required throughout forward pass"""
-        # helper variable
-        num_global_attn_indices = is_index_global_attn.long().sum(dim=1)
-
-        # max number of global attn indices in batch
-        max_num_global_attn_indices = num_global_attn_indices.max()
-
-        # indices of global attn
-        is_index_global_attn_nonzero = is_index_global_attn.nonzero(as_tuple=True)
-
-        # helper variable
-        is_local_index_global_attn = torch.arange(
-            max_num_global_attn_indices, device=is_index_global_attn.device
-        ) < num_global_attn_indices.unsqueeze(dim=-1)
-
-        # location of the non-padding values within global attention indices
-        is_local_index_global_attn_nonzero = is_local_index_global_attn.nonzero(as_tuple=True)
-
-        # location of the padding values within global attention indices
-        is_local_index_no_global_attn_nonzero = (is_local_index_global_attn == 0).nonzero(as_tuple=True)
-        return (
-            max_num_global_attn_indices,
-            is_index_global_attn_nonzero,
-            is_local_index_global_attn_nonzero,
-            is_local_index_no_global_attn_nonzero,
-        )
-
-    def _concat_with_global_key_attn_probs(
-        self,
-        key_vectors,
-        query_vectors,
-        max_num_global_attn_indices,
-        is_index_global_attn_nonzero,
-        is_local_index_global_attn_nonzero,
-        is_local_index_no_global_attn_nonzero,
-    ):
-        batch_size = key_vectors.shape[0]
-
-        # create only global key vectors
-        key_vectors_only_global = key_vectors.new_zeros(
-            batch_size, max_num_global_attn_indices, self.num_heads, self.head_dim
-        )
-
-        key_vectors_only_global[is_local_index_global_attn_nonzero] = key_vectors[is_index_global_attn_nonzero]
-
-        # (batch_size, seq_len, num_heads, max_num_global_attn_indices)
-        attn_probs_from_global_key = torch.einsum("blhd,bshd->blhs", (query_vectors, key_vectors_only_global))
-
-        attn_probs_from_global_key[
-            is_local_index_no_global_attn_nonzero[0], :, :, is_local_index_no_global_attn_nonzero[1]
-        ] = -10000.0
-
-        return attn_probs_from_global_key
-
-    def _compute_attn_output_with_global_indices(
-        self,
-        value_vectors,
-        attn_probs,
-        max_num_global_attn_indices,
-        is_index_global_attn_nonzero,
-        is_local_index_global_attn_nonzero,
-    ):
-        batch_size = attn_probs.shape[0]
-
-        # cut local attn probs to global only
-        attn_probs_only_global = attn_probs.narrow(-1, 0, max_num_global_attn_indices)
-        # get value vectors for global only
-        value_vectors_only_global = value_vectors.new_zeros(
-            batch_size, max_num_global_attn_indices, self.num_heads, self.head_dim
-        )
-        value_vectors_only_global[is_local_index_global_attn_nonzero] = value_vectors[is_index_global_attn_nonzero]
-
-        # use `matmul` because `einsum` crashes sometimes with fp16
-        # attn = torch.einsum('blhs,bshd->blhd', (selected_attn_probs, selected_v))
-        # compute attn output only global
-        attn_output_only_global = torch.matmul(
-            attn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)
-        ).transpose(1, 2)
-
-        # drop the global columns, keeping only the local attn probs
-        attn_probs_without_global = attn_probs.narrow(
-            -1, max_num_global_attn_indices, attn_probs.size(-1) - max_num_global_attn_indices
-        ).contiguous()
-
-        # compute attn output without global
-        attn_output_without_global = self._sliding_chunks_matmul_attn_probs_value(
-            attn_probs_without_global, value_vectors, self.one_sided_attn_window_size
-        )
-        return attn_output_only_global + attn_output_without_global
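A small, self-contained sketch of the `blhd,bshd->blhs` einsum pattern used above in `_concat_with_global_key_attn_probs` (toy shapes are assumptions, not from the original file):

```python
import torch

# Toy shapes: batch=1, seq_len=6, heads=2, head_dim=4, and 2 global tokens.
query = torch.randn(1, 6, 2, 4)        # (batch, seq_len, heads, head_dim)
global_key = torch.randn(1, 2, 2, 4)   # (batch, max_num_global, heads, head_dim)

# "blhd,bshd->blhs": every position l attends to every global position s, per head h.
scores = torch.einsum("blhd,bshd->blhs", query, global_key)
print(scores.shape)  # torch.Size([1, 6, 2, 2])
```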
-    def _compute_global_attn_output_from_hidden(
-        self,
-        global_query_vectors,
-        global_key_vectors,
-        global_value_vectors,
-        max_num_global_attn_indices,
-        layer_head_mask,
-        is_local_index_global_attn_nonzero,
-        is_index_global_attn_nonzero,
-        is_local_index_no_global_attn_nonzero,
-        is_index_masked,
-    ):
-        global_query_vectors = global_query_vectors.transpose(0, 1)
-        seq_len, batch_size, _, _ = global_query_vectors.shape
-        global_query_vectors_only_global = global_query_vectors.new_zeros(
-            max_num_global_attn_indices, batch_size, self.num_heads, self.head_dim
-        )
-        global_query_vectors_only_global[is_local_index_global_attn_nonzero[::-1]] = global_query_vectors[
-            is_index_global_attn_nonzero[::-1]
-        ]
-
-        seq_len_q, batch_size_q, _, _ = global_query_vectors_only_global.shape
-
-        global_query_vectors_only_global = global_query_vectors_only_global.view(
-            seq_len_q, batch_size_q, self.num_heads, self.head_dim
-        )
-        global_key_vectors = global_key_vectors.transpose(0, 1)
-        global_value_vectors = global_value_vectors.transpose(0, 1)
-
-        # reshape
-        global_query_vectors_only_global = (
-            global_query_vectors_only_global.contiguous()
-            .view(max_num_global_attn_indices, batch_size * self.num_heads, self.head_dim)
-            .transpose(0, 1)
-        )  # (batch_size * self.num_heads, max_num_global_attn_indices, head_dim)
-        global_key_vectors = (
-            global_key_vectors.contiguous().view(-1, batch_size * self.num_heads, self.head_dim).transpose(0, 1)
-        )  # (batch_size * self.num_heads, seq_len, head_dim)
-        global_value_vectors = (
-            global_value_vectors.contiguous().view(-1, batch_size * self.num_heads, self.head_dim).transpose(0, 1)
-        )  # (batch_size * self.num_heads, seq_len, head_dim)
-
-        # compute attn scores
-        global_attn_scores = torch.bmm(global_query_vectors_only_global, global_key_vectors.transpose(1, 2))
-
-        assert list(global_attn_scores.size()) == [
-            batch_size * self.num_heads,
-            max_num_global_attn_indices,
-            seq_len,
-        ], f"global_attn_scores have the wrong size. Size should be {(batch_size * self.num_heads, max_num_global_attn_indices, seq_len)}, but is {global_attn_scores.size()}."
-
-        global_attn_scores = global_attn_scores.view(
-            batch_size, self.num_heads, max_num_global_attn_indices, seq_len
-        )
-
-        global_attn_scores[
-            is_local_index_no_global_attn_nonzero[0], :, is_local_index_no_global_attn_nonzero[1], :
-        ] = -10000.0
-
-        global_attn_scores = global_attn_scores.masked_fill(
-            is_index_masked[:, None, None, :],
-            -10000.0,
-        )
-
-        global_attn_scores = global_attn_scores.view(
-            batch_size * self.num_heads, max_num_global_attn_indices, seq_len
-        )
-
-        # compute global attn probs
-        global_attn_probs_float = nn.functional.softmax(
-            global_attn_scores, dim=-1, dtype=torch.float32
-        )  # use fp32 for numerical stability
-
-        # apply layer head masking
-        if layer_head_mask is not None:
-            assert layer_head_mask.size() == (
-                self.num_heads,
-            ), f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
-            global_attn_probs_float = layer_head_mask.view(1, -1, 1, 1) * global_attn_probs_float.view(
-                batch_size, self.num_heads, max_num_global_attn_indices, seq_len
-            )
-            global_attn_probs_float = global_attn_probs_float.view(
-                batch_size * self.num_heads, max_num_global_attn_indices, seq_len
-            )
-
-        global_attn_probs = nn.functional.dropout(
-            global_attn_probs_float.type_as(global_attn_scores), p=self.dropout, training=self.training
-        )
-
-        # global attn output
-        global_attn_output = torch.bmm(global_attn_probs, global_value_vectors)
-
-        assert list(global_attn_output.size()) == [
-            batch_size * self.num_heads,
-            max_num_global_attn_indices,
-            self.head_dim,
-        ], f"global_attn_output tensor has the wrong size. Size should be {(batch_size * self.num_heads, max_num_global_attn_indices, self.head_dim)}, but is {global_attn_output.size()}."
-
-        global_attn_probs = global_attn_probs.view(
-            batch_size, self.num_heads, max_num_global_attn_indices, seq_len
-        )
-        global_attn_output = global_attn_output.view(
-            batch_size, self.num_heads, max_num_global_attn_indices, self.head_dim
-        )
-        return global_attn_output, global_attn_probs
-
-    def get_extended_attention_mask(self, attention_mask, input_shape, device):
-        """
-        Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
-
-        Arguments:
-            attention_mask (:obj:`torch.Tensor`):
-                Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
-            input_shape (:obj:`Tuple[int]`):
-                The shape of the input to the model.
-            device: (:obj:`torch.device`):
-                The device of the input to the model.
-
-        Returns:
-            :obj:`torch.Tensor` The extended attention mask, with the same dtype as :obj:`attention_mask.dtype`.
-        """
-        # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
-        # ourselves in which case we just need to make it broadcastable to all heads.
-        ones = torch.ones_like(attention_mask)
-        zero = torch.zeros_like(attention_mask)
-        attention_mask = torch.where(attention_mask < 0, zero, ones)
-
-        if attention_mask.dim() == 3:
-            extended_attention_mask = attention_mask[:, None, :, :]
-        elif attention_mask.dim() == 2:
-            extended_attention_mask = attention_mask[:, None, None, :]
-        else:
-            raise ValueError(
-                f"Wrong shape for input_ids (shape {input_shape}) or attention_mask (shape {attention_mask.shape})"
-            )
-
-        # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
-        # masked positions, this operation will create a tensor which is 0.0 for
-        # positions we want to attend and -10000.0 for masked positions.
-        # Since we are adding it to the raw scores before the softmax, this is
-        # effectively the same as removing these entirely.
-        # extended_attention_mask = extended_attention_mask.to(dtype=self.dtype)  # fp16 compatibility
-        extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
-        return extended_attention_mask
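A small sketch of why adding `-10000.0` to the raw scores before the softmax is effectively the same as removing the masked positions (illustrative values only):

```python
import torch

scores = torch.tensor([[2.0, 1.0, 0.5]])
additive_mask = torch.tensor([[0.0, 0.0, -10000.0]])  # last position is padding
probs = torch.softmax(scores + additive_mask, dim=-1)
print(probs)  # approximately tensor([[0.7311, 0.2689, 0.0000]])
```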
-
-# Copied from transformers.models.bert.modeling_bert.BertSelfOutput
-class LongformerSelfOutput(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
-        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-        self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
-    def forward(self, hidden_states, input_tensor):
-        hidden_states = self.dense(hidden_states)
-        hidden_states = self.dropout(hidden_states)
-        hidden_states = self.LayerNorm(hidden_states + input_tensor)
-        return hidden_states
-
-
-class LongformerAttention(nn.Module):
-    def __init__(self, config, layer_id=0):
-        super().__init__()
-        self.self = LongformerSelfAttention(config, layer_id)
-        self.output = LongformerSelfOutput(config)
-        self.pruned_heads = set()
-
-    def prune_heads(self, heads):
-        if len(heads) == 0:
-            return
-        # LongformerSelfAttention stores these attributes as `num_heads` / `head_dim`
-        heads, index = find_pruneable_heads_and_indices(
-            heads, self.self.num_heads, self.self.head_dim, self.pruned_heads
-        )
-
-        # Prune linear layers
-        self.self.query = prune_linear_layer(self.self.query, index)
-        self.self.key = prune_linear_layer(self.self.key, index)
-        self.self.value = prune_linear_layer(self.self.value, index)
-        self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
-
-        # Update hyper params and store pruned heads
-        self.self.num_heads = self.self.num_heads - len(heads)
-        self.self.all_head_size = self.self.head_dim * self.self.num_heads
-        self.pruned_heads = self.pruned_heads.union(heads)
-
-    def forward(
-        self,
-        hidden_states,
-        attention_mask=None,
-        layer_head_mask=None,
-        is_index_masked=None,
-        is_index_global_attn=None,
-        is_global_attn=None,
-        output_attentions=False,
-    ):
-        self_outputs = self.self(
-            hidden_states,
-            attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
-            is_index_masked=is_index_masked,
-            is_index_global_attn=is_index_global_attn,
-            is_global_attn=is_global_attn,
-            output_attentions=output_attentions,
-        )
-        attn_output = self.output(self_outputs[0], hidden_states)
-        outputs = (attn_output,) + self_outputs[1:]
-        return outputs
-
-
-# Copied from transformers.models.bert.modeling_bert.BertIntermediate
-class LongformerIntermediate(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
-        if isinstance(config.hidden_act, str):
-            self.intermediate_act_fn = ACT2FN[config.hidden_act]
-        else:
-            self.intermediate_act_fn = config.hidden_act
-
-    def forward(self, hidden_states):
-        hidden_states = self.dense(hidden_states)
-        hidden_states = self.intermediate_act_fn(hidden_states)
-        return hidden_states
-
-
-# Copied from transformers.models.bert.modeling_bert.BertOutput
-class LongformerOutput(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
-        self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-        self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
-    def forward(self, hidden_states, input_tensor):
-        hidden_states = self.dense(hidden_states)
-        hidden_states = self.dropout(hidden_states)
-        hidden_states = self.LayerNorm(hidden_states + input_tensor)
-        return hidden_states
-
-
-class LongformerLayer(nn.Module):
-    def __init__(self, config, layer_id=0):
-        super().__init__()
-        self.attention = LongformerAttention(config, layer_id)
-        self.intermediate = LongformerIntermediate(config)
-        self.output = LongformerOutput(config)
-        self.chunk_size_feed_forward = config.chunk_size_feed_forward
-        self.seq_len_dim = 1
-
-    def forward(
-        self,
-        hidden_states,
-        attention_mask=None,
-        layer_head_mask=None,
-        is_index_masked=None,
-        is_index_global_attn=None,
-        is_global_attn=None,
-        output_attentions=False,
-    ):
-        self_attn_outputs = self.attention(
-            hidden_states,
-            attention_mask=attention_mask,
-            layer_head_mask=layer_head_mask,
-            is_index_masked=is_index_masked,
-            is_index_global_attn=is_index_global_attn,
-            is_global_attn=is_global_attn,
-            output_attentions=output_attentions,
-        )
-        attn_output = self_attn_outputs[0]
-        outputs = self_attn_outputs[1:]
-
-        layer_output = apply_chunking_to_forward(
-            self.ff_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attn_output
-        )
-        outputs = (layer_output,) + outputs
-        return outputs
-
-    def ff_chunk(self, attn_output):
-        intermediate_output = self.intermediate(attn_output)
-        layer_output = self.output(intermediate_output, attn_output)
-        return layer_output
-
-
-class LongformerEncoder(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.config = config
-        self.layer = nn.ModuleList(
-            [LongformerLayer(config, layer_id=i) for i in range(config.num_hidden_layers)]
-        )
-
-    def forward(
-        self,
-        hidden_states,
-        attention_mask=None,
-        head_mask=None,
-        output_attentions=False,
-        output_hidden_states=False,
-        return_dict=True,
-    ):
-
-        is_index_masked = attention_mask < 0
-        is_index_global_attn = attention_mask > 0
-        is_global_attn = is_index_global_attn.flatten().any().item()
-
-        all_hidden_states = () if output_hidden_states else None
-        # All local attentions.
-        all_attentions = () if output_attentions else None
-        all_global_attentions = () if (output_attentions and is_global_attn) else None
-
-        # check if head_mask has a correct number of layers specified if desired
-        if head_mask is not None:
-            assert head_mask.size()[0] == (
-                len(self.layer)
-            ), f"The head_mask should be specified for {len(self.layer)} layers, but it is for {head_mask.size()[0]}."
-        for idx, layer_module in enumerate(self.layer):
-            if output_hidden_states:
-                all_hidden_states = all_hidden_states + (hidden_states,)
-
-            if getattr(self.config, "gradient_checkpointing", False) and self.training:
-
-                def create_custom_forward(module):
-                    def custom_forward(*inputs):
-                        return module(*inputs, is_global_attn, output_attentions)
-
-                    return custom_forward
-
-                layer_outputs = torch.utils.checkpoint.checkpoint(
-                    create_custom_forward(layer_module),
-                    hidden_states,
-                    attention_mask,
-                    head_mask[idx] if head_mask is not None else None,
-                    is_index_masked,
-                    is_index_global_attn,
-                )
-            else:
-                layer_outputs = layer_module(
-                    hidden_states,
-                    attention_mask=attention_mask,
-                    layer_head_mask=head_mask[idx] if head_mask is not None else None,
-                    is_index_masked=is_index_masked,
-                    is_index_global_attn=is_index_global_attn,
-                    is_global_attn=is_global_attn,
-                    output_attentions=output_attentions,
-                )
-            hidden_states = layer_outputs[0]
-
-            if output_attentions:
-                # bsz x seq_len x num_attn_heads x (num_global_attn + attention_window_len + 1)
-                #   => bsz x num_attn_heads x seq_len x (num_global_attn + attention_window_len + 1)
-                all_attentions = all_attentions + (layer_outputs[1].transpose(1, 2),)
-
-                if is_global_attn:
-                    # bsz x num_attn_heads x num_global_attn x seq_len => bsz x num_attn_heads x seq_len x num_global_attn
-                    all_global_attentions = all_global_attentions + (layer_outputs[2].transpose(2, 3),)
-
-        # Add last layer
-        if output_hidden_states:
-            all_hidden_states = all_hidden_states + (hidden_states,)
-
-        if not return_dict:
-            return tuple(
-                v for v in [hidden_states, all_hidden_states, all_attentions, all_global_attentions] if v is not None
-            )
-        return LongformerBaseModelOutput(
-            last_hidden_state=hidden_states,
-            hidden_states=all_hidden_states,
-            attentions=all_attentions,
-            global_attentions=all_global_attentions,
-        )
-
-
-# Copied from transformers.models.bert.modeling_bert.BertPooler
-class LongformerPooler(nn.Module):
-    def __init__(self, config):
-        super().__init__()
-        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
-        self.activation = nn.Tanh()
-
-    def forward(self, hidden_states):
-        # We "pool" the model by simply taking the hidden state corresponding
-        # to the first token.
-        first_token_tensor = hidden_states[:, 0]
-        pooled_output = self.dense(first_token_tensor)
-        pooled_output = self.activation(pooled_output)
-        return pooled_output
-
-
-# Copied from transformers.models.roberta.modeling_roberta.RobertaLMHead with Roberta->Longformer
-class LongformerLMHead(nn.Module):
-    """Longformer Head for masked language modeling."""
-
-    def __init__(self, config):
-        super().__init__()
-        self.dense = nn.Linear(config.hidden_size, config.hidden_size)
-        self.layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-
-        self.decoder = nn.Linear(config.hidden_size, config.vocab_size)
-        self.bias = nn.Parameter(torch.zeros(config.vocab_size))
-        self.decoder.bias = self.bias
-
-    def forward(self, features, **kwargs):
-        x = self.dense(features)
-        x = gelu(x)
-        x = self.layer_norm(x)
-
-        # project back to size of vocabulary with bias
-        x = self.decoder(x)
-
-        return x
-
-    def _tie_weights(self):
-        # To tie those two weights if they get disconnected (on TPU or when the bias is resized)
-        self.bias = self.decoder.bias
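A shape walk-through of the LM head above, a minimal sketch using toy sizes in place of the real `hidden_size` and `vocab_size`:

```python
import torch
import torch.nn as nn

hidden_size, vocab_size = 16, 100
dense = nn.Linear(hidden_size, hidden_size)
layer_norm = nn.LayerNorm(hidden_size, eps=1e-5)
decoder = nn.Linear(hidden_size, vocab_size)

features = torch.randn(2, 8, hidden_size)           # (batch, seq_len, hidden)
x = layer_norm(nn.functional.gelu(dense(features)))
logits = decoder(x)                                  # (batch, seq_len, vocab)
print(logits.shape)  # torch.Size([2, 8, 100])
```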
- """ - - config_class = LongformerConfig - base_model_prefix = "longformer" - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, nn.Linear): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_( - mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_( - mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - -LONGFORMER_START_DOCSTRING = r""" - - This model inherits from :class:`~transformers.PreTrainedModel`. Check the superclass documentation for the generic - methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, - pruning heads etc.) - - This model is also a PyTorch `torch.nn.Module `__ - subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to - general usage and behavior. - - Parameters: - config (:class:`~transformers.LongformerConfig`): Model configuration class with all the parameters of the - model. Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model - weights. -""" - -LONGFORMER_INPUTS_DOCSTRING = r""" - Args: - input_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using :class:`~transformers.LongformerTokenizer`. See - :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for - details. - - `What are input IDs? <../glossary.html#input-ids>`__ - attention_mask (:obj:`torch.FloatTensor` of shape :obj:`({0})`, `optional`): - Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - `What are attention masks? <../glossary.html#attention-mask>`__ - global_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`({0})`, `optional`): - Mask to decide the attention given on each token, local attention or global attention. Tokens with global - attention attends to all other tokens, and all other tokens attend to them. This is important for - task-specific finetuning because it makes the model more flexible at representing the task. For example, - for classification, the token should be given global attention. For QA, all question tokens should also - have global attention. Please refer to the `Longformer paper `__ for more - details. Mask values selected in ``[0, 1]``: - - - 0 for local attention (a sliding window attention), - - 1 for global attention (tokens that attend to all other tokens, and all other tokens attend to them). - - head_mask (:obj:`torch.Tensor` of shape :obj:`(num_layers, num_heads)`, `optional`): - Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in ``[0, 1]``: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
-
-        token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
-            Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0,
-            1]``:
-
-            - 0 corresponds to a `sentence A` token,
-            - 1 corresponds to a `sentence B` token.
-
-            `What are token type IDs? <../glossary.html#token-type-ids>`_
-        position_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
-            Indices of positions of each input sequence tokens in the position embeddings. Selected in the range ``[0,
-            config.max_position_embeddings - 1]``.
-
-            `What are position IDs? <../glossary.html#position-ids>`_
-        inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`({0}, hidden_size)`, `optional`):
-            Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation.
-            This is useful if you want more control over how to convert :obj:`input_ids` indices into associated
-            vectors than the model's internal embedding lookup matrix.
-        output_attentions (:obj:`bool`, `optional`):
-            Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
-            tensors for more detail.
-        output_hidden_states (:obj:`bool`, `optional`):
-            Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
-            more detail.
-        return_dict (:obj:`bool`, `optional`):
-            Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
-"""
-
-
-@add_start_docstrings(
-    "The bare Longformer Model outputting raw hidden-states without any specific head on top.",
-    LONGFORMER_START_DOCSTRING,
-)
-class LongformerModel(LongformerPreTrainedModel):
-    """
-    This class copies code from :class:`~transformers.RobertaModel` and overwrites standard self-attention with
-    longformer self-attention to provide the ability to process long sequences following the self-attention approach
-    described in `Longformer: the Long-Document Transformer <https://arxiv.org/abs/2004.05150>`__ by Iz Beltagy,
-    Matthew E. Peters, and Arman Cohan. Longformer self-attention combines a local (sliding window) and global
-    attention to extend to long documents without the O(n^2) increase in memory and compute.
-
-    The self-attention module :obj:`LongformerSelfAttention` implemented here supports the combination of local and
-    global attention but it lacks support for autoregressive attention and dilated attention. Autoregressive and
-    dilated attention are more relevant for autoregressive language modeling than finetuning on downstream tasks.
-    A future release will add support for autoregressive attention, but the support for dilated attention requires a
-    custom CUDA kernel to be memory and compute efficient.
- - """ - - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - if isinstance(config.attention_window, int): - assert config.attention_window % 2 == 0, "`config.attention_window` has to be an even value" - assert config.attention_window > 0, "`config.attention_window` has to be positive" - config.attention_window = [ - config.attention_window] * config.num_hidden_layers # one value per layer - else: - assert len(config.attention_window) == config.num_hidden_layers, ( - "`len(config.attention_window)` should equal `config.num_hidden_layers`. " - f"Expected {config.num_hidden_layers}, given {len(config.attention_window)}" - ) - - self.embeddings = LongformerEmbeddings(config) - self.encoder = LongformerEncoder(config) - self.pooler = LongformerPooler(config) if add_pooling_layer else None - - self.init_weights() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - def _pad_to_window_size( - self, - input_ids: torch.Tensor, - attention_mask: torch.Tensor, - token_type_ids: torch.Tensor, - position_ids: torch.Tensor, - inputs_embeds: torch.Tensor, - pad_token_id: int, - ): - """A helper function to pad tokens and mask to work with implementation of Longformer self-attention.""" - # padding - attention_window = ( - self.config.attention_window - if isinstance(self.config.attention_window, int) - else max(self.config.attention_window) - ) - - assert attention_window % 2 == 0, f"`attention_window` should be an even value. 
Given {attention_window}" - input_shape = input_ids.shape if input_ids is not None else inputs_embeds.shape - batch_size, seq_len = input_shape[:2] - - padding_len = (attention_window - seq_len % - attention_window) % attention_window - if padding_len > 0: - logger.info( - f"Input ids are automatically padded from {seq_len} to {seq_len + padding_len} to be a multiple of " - f"`config.attention_window`: {attention_window}" - ) - if input_ids is not None: - input_ids = nn.functional.pad( - input_ids, (0, padding_len), value=pad_token_id) - if position_ids is not None: - # pad with position_id = pad_token_id as in modeling_roberta.RobertaEmbeddings - position_ids = nn.functional.pad( - position_ids, (0, padding_len), value=pad_token_id) - if inputs_embeds is not None: - input_ids_padding = inputs_embeds.new_full( - (batch_size, padding_len), - self.config.pad_token_id, - dtype=torch.long, - ) - inputs_embeds_padding = self.embeddings(input_ids_padding) - inputs_embeds = torch.cat( - [inputs_embeds, inputs_embeds_padding], dim=-2) - - attention_mask = nn.functional.pad( - attention_mask, (0, padding_len), value=False - ) # no attention on the padding tokens - token_type_ids = nn.functional.pad( - token_type_ids, (0, padding_len), value=0) # pad with token_type_id = 0 - - return padding_len, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds - - def _merge_to_attention_mask(self, attention_mask: torch.Tensor, global_attention_mask: torch.Tensor): - # longformer self attention expects attention mask to have 0 (no attn), 1 (local attn), 2 (global attn) - # (global_attention_mask + 1) => 1 for local attention, 2 for global attention - # => final attention_mask => 0 for no attention, 1 for local attention 2 for global attention - if attention_mask is not None: - attention_mask = attention_mask * (global_attention_mask + 1) - else: - # simply use `global_attention_mask` as `attention_mask` - # if no `attention_mask` is given - attention_mask = global_attention_mask + 1 - return attention_mask - - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=LongformerBaseModelOutputWithPooling, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids=None, - attention_mask=None, - global_attention_mask=None, - head_mask=None, - token_type_ids=None, - position_ids=None, - inputs_embeds=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - r""" - - Returns: - - Examples:: - - >>> import torch - >>> from transformers import LongformerModel, LongformerTokenizer - - >>> model = LongformerModel.from_pretrained('allenai/longformer-base-4096') - >>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096') - - >>> SAMPLE_TEXT = ' '.join(['Hello world! '] * 1000) # long input document - >>> input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1 - - >>> attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention - >>> global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to global attention to be deactivated for all tokens - >>> global_attention_mask[:, [1, 4, 21,]] = 1 # Set global attention to random tokens for the sake of this example - ... # Usually, set global attention based on the task. For example, - ... # classification: the token - ... # QA: question tokens - ... 
# LM: potentially on the beginning of sentences and paragraphs - >>> outputs = model(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask) - >>> sequence_output = outputs.last_hidden_state - >>> pooled_output = outputs.pooler_output - """ - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if input_ids is not None and inputs_embeds is not None: - raise ValueError( - "You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError( - "You have to specify either input_ids or inputs_embeds") - - device = input_ids.device if input_ids is not None else inputs_embeds.device - - if attention_mask is None: - attention_mask = torch.ones(input_shape, device=device) - if token_type_ids is None: - token_type_ids = torch.zeros( - input_shape, dtype=torch.long, device=device) - - # merge `global_attention_mask` and `attention_mask` - if global_attention_mask is not None: - attention_mask = self._merge_to_attention_mask( - attention_mask, global_attention_mask) - - if self.config.use_sparse_attention: - padding_len, input_ids, attention_mask, token_type_ids, position_ids, inputs_embeds = self._pad_to_window_size( - input_ids=input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - pad_token_id=self.config.pad_token_id, - ) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device)[ - :, 0, 0, : - ] - - embedding_output = self.embeddings( - input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds - ) - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler( - sequence_output) if self.pooler is not None else None - - # undo padding - if self.config.use_sparse_attention: - if padding_len > 0: - # unpad `sequence_output` because the calling function is expecting a length == input_ids.size(1) - sequence_output = sequence_output[:, :-padding_len] - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return LongformerBaseModelOutputWithPooling( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - global_attentions=encoder_outputs.global_attentions, - ) - - -@add_start_docstrings("""Longformer Model with a `language modeling` head on top. 
""", LONGFORMER_START_DOCSTRING) -class LongformerForMaskedLM(LongformerPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - - self.longformer = LongformerModel(config, add_pooling_layer=False) - self.lm_head = LongformerLMHead(config) - - self.init_weights() - - def get_output_embeddings(self): - return self.lm_head.decoder - - def set_output_embeddings(self, new_embeddings): - self.lm_head.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=LongformerMaskedLMOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids=None, - attention_mask=None, - global_attention_mask=None, - head_mask=None, - token_type_ids=None, - position_ids=None, - inputs_embeds=None, - labels=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - r""" - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ..., - config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored - (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` - kwargs (:obj:`Dict[str, any]`, optional, defaults to `{}`): - Used to hide legacy arguments that have been deprecated. - - Returns: - - Examples:: - - >>> import torch - >>> from transformers import LongformerForMaskedLM, LongformerTokenizer - - >>> model = LongformerForMaskedLM.from_pretrained('allenai/longformer-base-4096') - >>> tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096') - - >>> SAMPLE_TEXT = ' '.join(['Hello world! '] * 1000) # long input document - >>> input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1 - - >>> attention_mask = None # default is local attention everywhere, which is a good choice for MaskedLM - ... # check ``LongformerModel.forward`` for more details how to set `attention_mask` - >>> outputs = model(input_ids, attention_mask=attention_mask, labels=input_ids) - >>> loss = outputs.loss - >>> prediction_logits = output.logits - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.longformer( - input_ids, - attention_mask=attention_mask, - global_attention_mask=global_attention_mask, - head_mask=head_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = outputs[0] - prediction_scores = self.lm_head(sequence_output) - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct( - prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return LongformerMaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) - - -@add_start_docstrings( - """ - Longformer Model transformer with a sequence classification/regression head on top (a linear layer on top of the - pooled output) e.g. 
for GLUE tasks. - """, - LONGFORMER_START_DOCSTRING, -) -class LongformerForSequenceClassification(LongformerPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - - self.longformer = LongformerModel(config, add_pooling_layer=False) - self.classifier = LongformerClassificationHead(config) - - self.init_weights() - - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=LongformerSequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids=None, - attention_mask=None, - global_attention_mask=None, - head_mask=None, - token_type_ids=None, - position_ids=None, - inputs_embeds=None, - labels=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - r""" - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): - Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ..., - config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss), - If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy). - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if global_attention_mask is None: - logger.info("Initializing global attention on CLS token...") - global_attention_mask = torch.zeros_like(input_ids) - # global attention on cls token - global_attention_mask[:, 0] = 1 - - outputs = self.longformer( - input_ids, - attention_mask=attention_mask, - global_attention_mask=global_attention_mask, - head_mask=head_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - sequence_output = outputs[0] - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct( - logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return LongformerSequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) - - -class LongformerClassificationHead(nn.Module): - """Head for sentence-level classification tasks.""" - - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, 
config.hidden_size)
-        self.dropout = nn.Dropout(config.hidden_dropout_prob)
-        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)
-
-    def forward(self, hidden_states, **kwargs):
-        # take <s> token (equiv. to [CLS])
-        hidden_states = hidden_states[:, 0, :]
-        hidden_states = self.dropout(hidden_states)
-        hidden_states = self.dense(hidden_states)
-        hidden_states = torch.tanh(hidden_states)
-        hidden_states = self.dropout(hidden_states)
-        output = self.out_proj(hidden_states)
-        return output
-
-
-@add_start_docstrings(
-    """
-    Longformer Model with a span classification head on top for extractive question-answering tasks like SQuAD /
-    TriviaQA (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
-    """,
-    LONGFORMER_START_DOCSTRING,
-)
-class LongformerForQuestionAnswering(LongformerPreTrainedModel):
-
-    _keys_to_ignore_on_load_unexpected = [r"pooler"]
-
-    def __init__(self, config):
-        super().__init__(config)
-        self.num_labels = config.num_labels
-
-        self.longformer = LongformerModel(config, add_pooling_layer=False)
-        self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
-
-        self.init_weights()
-
-    @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
-    @replace_return_docstrings(output_type=LongformerQuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC)
-    def forward(
-        self,
-        input_ids=None,
-        attention_mask=None,
-        global_attention_mask=None,
-        head_mask=None,
-        token_type_ids=None,
-        position_ids=None,
-        inputs_embeds=None,
-        start_positions=None,
-        end_positions=None,
-        output_attentions=None,
-        output_hidden_states=None,
-        return_dict=None,
-    ):
-        r"""
-        start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
-            Labels for position (index) of the start of the labelled span for computing the token classification loss.
-            Positions are clamped to the length of the sequence (:obj:`sequence_length`). Positions outside of the
-            sequence are not taken into account for computing the loss.
-        end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
-            Labels for position (index) of the end of the labelled span for computing the token classification loss.
-            Positions are clamped to the length of the sequence (:obj:`sequence_length`). Positions outside of the
-            sequence are not taken into account for computing the loss.
- - Returns: - - Examples:: - - >>> from transformers import LongformerTokenizer, LongformerForQuestionAnswering - >>> import torch - - >>> tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa") - >>> model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa") - - >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" - >>> encoding = tokenizer(question, text, return_tensors="pt") - >>> input_ids = encoding["input_ids"] - - >>> # default is local attention everywhere - >>> # the forward method will automatically set global attention on question tokens - >>> attention_mask = encoding["attention_mask"] - - >>> outputs = model(input_ids, attention_mask=attention_mask) - >>> start_logits = outputs.start_logits - >>> end_logits = outputs.end_logits - >>> all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist()) - - >>> answer_tokens = all_tokens[torch.argmax(start_logits) :torch.argmax(end_logits)+1] - >>> answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens)) # remove space prepending space token - - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if global_attention_mask is None: - if input_ids is None: - logger.warning( - "It is not possible to automatically generate the `global_attention_mask` because input_ids is None. Please make sure that it is correctly set." - ) - else: - # set global attention on question tokens automatically - global_attention_mask = _compute_global_attention_mask( - input_ids, self.config.sep_token_id) - - outputs = self.longformer( - input_ids, - attention_mask=attention_mask, - global_attention_mask=global_attention_mask, - head_mask=head_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return LongformerQuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) - - -@add_start_docstrings( - """ - Longformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. 
- for Named-Entity-Recognition (NER) tasks. - """, - LONGFORMER_START_DOCSTRING, -) -class LongformerForTokenClassification(LongformerPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.longformer = LongformerModel(config, add_pooling_layer=False) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - self.init_weights() - - @add_start_docstrings_to_model_forward(LONGFORMER_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=LongformerTokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids=None, - attention_mask=None, - global_attention_mask=None, - head_mask=None, - token_type_ids=None, - position_ids=None, - inputs_embeds=None, - labels=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - r""" - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels - - 1]``. - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.longformer( - input_ids, - attention_mask=attention_mask, - global_attention_mask=global_attention_mask, - head_mask=head_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - sequence_output = self.dropout(sequence_output) - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - # Only keep active parts of the loss - if attention_mask is not None: - active_loss = attention_mask.view(-1) == 1 - active_logits = logits.view(-1, self.num_labels) - active_labels = torch.where( - active_loss, labels.view(-1), torch.tensor( - loss_fct.ignore_index).type_as(labels) - ) - loss = loss_fct(active_logits, active_labels) - else: - loss = loss_fct( - logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return LongformerTokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) - - -@add_start_docstrings( - """ - Longformer Model with a multiple choice classification head on top (a linear layer on top of the pooled output and - a softmax) e.g. for RocStories/SWAG tasks. 
- """, - LONGFORMER_START_DOCSTRING, -) -class LongformerForMultipleChoice(LongformerPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.longformer = LongformerModel(config) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - self.classifier = nn.Linear(config.hidden_size, 1) - - self.init_weights() - - @add_start_docstrings_to_model_forward( - LONGFORMER_INPUTS_DOCSTRING.format( - "batch_size, num_choices, sequence_length") - ) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=LongformerMultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids=None, - token_type_ids=None, - attention_mask=None, - global_attention_mask=None, - head_mask=None, - labels=None, - position_ids=None, - inputs_embeds=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - ): - r""" - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): - Labels for computing the multiple choice classification loss. Indices should be in ``[0, ..., - num_choices-1]`` where :obj:`num_choices` is the size of the second dimension of the input tensors. (See - :obj:`input_ids` above) - """ - num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1] - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # set global attention on question tokens - if global_attention_mask is None and input_ids is not None: - logger.info("Initializing global attention on multiple choice...") - # put global attention on all tokens after `config.sep_token_id` - global_attention_mask = torch.stack( - [ - _compute_global_attention_mask( - input_ids[:, i], self.config.sep_token_id, before_sep_token=False) - for i in range(num_choices) - ], - dim=1, - ) - - flat_input_ids = input_ids.view(-1, input_ids.size(-1) - ) if input_ids is not None else None - flat_position_ids = position_ids.view( - -1, position_ids.size(-1)) if position_ids is not None else None - flat_token_type_ids = token_type_ids.view( - -1, token_type_ids.size(-1)) if token_type_ids is not None else None - flat_attention_mask = attention_mask.view( - -1, attention_mask.size(-1)) if attention_mask is not None else None - flat_global_attention_mask = ( - global_attention_mask.view(-1, global_attention_mask.size(-1)) - if global_attention_mask is not None - else None - ) - flat_inputs_embeds = ( - inputs_embeds.view(-1, inputs_embeds.size(-2), - inputs_embeds.size(-1)) - if inputs_embeds is not None - else None - ) - - outputs = self.longformer( - flat_input_ids, - position_ids=flat_position_ids, - token_type_ids=flat_token_type_ids, - attention_mask=flat_attention_mask, - global_attention_mask=flat_global_attention_mask, - head_mask=head_mask, - inputs_embeds=flat_inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - reshaped_logits = logits.view(-1, num_choices) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(reshaped_logits, labels) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return LongformerMultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - 
attentions=outputs.attentions, - global_attentions=outputs.global_attentions, - ) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/gpu/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/tests/gpu/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/texttospeech.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/texttospeech.py deleted file mode 100644 index 3c88925cac0c56e52d35acfa5d6d7e5ce51329c7..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/src/glow_tts/texttospeech.py +++ /dev/null @@ -1,146 +0,0 @@ -from __future__ import absolute_import, division, print_function, unicode_literals -from typing import Tuple - -from scipy.io.wavfile import write -from hifi.env import AttrDict -from hifi.models import Generator - -import numpy as np -import os -import json - -import torch -from text import text_to_sequence -import commons -import models -import utils -import sys -from argparse import ArgumentParser - - -def check_directory(dir): - if not os.path.exists(dir): - sys.exit("Error: {} directory does not exist".format(dir)) - - -class TextToMel: - def __init__(self, glow_model_dir, device="cuda"): - self.glow_model_dir = glow_model_dir - check_directory(self.glow_model_dir) - self.device = device - self.hps, self.glow_tts_model = self.load_glow_tts() - pass - - def load_glow_tts(self): - hps = utils.get_hparams_from_dir(self.glow_model_dir) - checkpoint_path = utils.latest_checkpoint_path(self.glow_model_dir) - symbols = list(hps.data.punc) + list(hps.data.chars) - glow_tts_model = models.FlowGenerator( - len(symbols) + getattr(hps.data, "add_blank", False), - out_channels=hps.data.n_mel_channels, - **hps.model - ) # .to(self.device) - - if self.device == "cuda": - glow_tts_model.to("cuda") - - utils.load_checkpoint(checkpoint_path, glow_tts_model) - glow_tts_model.decoder.store_inverse() - _ = glow_tts_model.eval() - - return hps, glow_tts_model - - def generate_mel(self, text, noise_scale=0.667, length_scale=1.0): - symbols = list(self.hps.data.punc) + list(self.hps.data.chars) - cleaner = self.hps.data.text_cleaners - if getattr(self.hps.data, "add_blank", False): - text_norm = text_to_sequence(text, symbols, cleaner) - text_norm = commons.intersperse(text_norm, len(symbols)) - else: # If not using "add_blank" option during training, adding spaces at the beginning and the end of utterance improves quality - text = " " + text.strip() + " " - text_norm = text_to_sequence(text, symbols, cleaner) - - sequence = np.array(text_norm)[None, :] - - if self.device == "cuda": - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]).cuda() - else: - x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).long() - x_tst_lengths = torch.tensor([x_tst.shape[1]]) - - with torch.no_grad(): - (y_gen_tst, *_), *_, (attn_gen, *_) = self.glow_tts_model( - x_tst, - x_tst_lengths, - gen=True, - noise_scale=noise_scale, - length_scale=length_scale, - ) - - return y_gen_tst - #return y_gen_tst.cpu().detach().numpy() - - -class MelToWav: - def __init__(self, hifi_model_dir, device="cuda"): - self.hifi_model_dir = hifi_model_dir - check_directory(self.hifi_model_dir) - self.device = device - self.h, self.hifi_gan_generator = self.load_hifi_gan() - pass - - def load_hifi_gan(self): - checkpoint_path = 
utils.latest_checkpoint_path(self.hifi_model_dir, regex="g_*")
-        config_file = os.path.join(self.hifi_model_dir, "config.json")
-        data = open(config_file).read()
-        json_config = json.loads(data)
-        h = AttrDict(json_config)
-        torch.manual_seed(h.seed)
-
-        generator = Generator(h).to(self.device)
-
-        assert os.path.isfile(checkpoint_path)
-        print("Loading '{}'".format(checkpoint_path))
-        state_dict_g = torch.load(checkpoint_path, map_location=self.device)
-        print("Complete.")
-
-        generator.load_state_dict(state_dict_g["generator"])
-
-        generator.eval()
-        generator.remove_weight_norm()
-
-        return h, generator
-
-    def generate_wav(self, mel):
-        # mel = torch.FloatTensor(mel).to(self.device)
-
-        y_g_hat = self.hifi_gan_generator(mel.to(self.device))  # passing through vocoder
-        audio = y_g_hat.squeeze()
-        audio = audio * 32768.0
-        audio = audio.cpu().detach().numpy().astype("int16")
-
-        return audio, self.h.sampling_rate
-
-
-if __name__ == "__main__":
-    parser = ArgumentParser()
-    parser.add_argument("-m", "--model", required=True, type=str)
-    parser.add_argument("-g", "--gan", required=True, type=str)
-    parser.add_argument("-d", "--device", type=str, default="cpu")
-    parser.add_argument("-t", "--text", type=str, required=True)
-    parser.add_argument("-w", "--wav", type=str, required=True)
-
-    args = parser.parse_args()
-
-    text_to_mel = TextToMel(glow_model_dir=args.model, device=args.device)
-    mel_to_wav = MelToWav(hifi_model_dir=args.gan, device=args.device)
-
-    mel = text_to_mel.generate_mel(args.text)
-    audio, sr = mel_to_wav.generate_wav(mel)
-
-    write(filename=args.wav, rate=sr, data=audio)
\ No newline at end of file
diff --git a/spaces/HenryCarle/your_sport_picker/info.md b/spaces/HenryCarle/your_sport_picker/info.md
deleted file mode 100644
index d6143037589c1791a1a313571b9582029bc8c2cb..0000000000000000000000000000000000000000
--- a/spaces/HenryCarle/your_sport_picker/info.md
+++ /dev/null
@@ -1,18 +0,0 @@
-# 😌 [Edit info.md - Sport Recommender]
-
-### 🧐 Problem Statement and Research Summary
-[Our goal is to make it easier for anyone who wants to play a sport to find one they can play and enjoy.]
-
-### 🎣 Data Collection Plan
-[Edit info.md - We collected our data by creating a form covering the questions most relevant to choosing a sport, then handing it out to our peers.]
-
-### 💥 Ethical Considerations (Data Privacy and Bias)
-* Data privacy: [Edit info.md - Your data is used only to improve our AI and to produce better recommendations for you.]
-* Bias: [Edit info.md - We are not aware of any bias in our AI.]
-
-### 👻 Our Team
-[Edit info.md - Erik: I love life, the outdoors, skiing, and chemistry and physics.
-Grady: soccer player, bassoonist, video games.
-Henry: I like to play games, science, & animals.]
- -![aiEDU logo](https://images.squarespace-cdn.com/content/v1/5e4efdef6d10420691f02bc1/5db5a8a3-1761-4fce-a096-bd5f2515162f/aiEDU+_black+logo+stacked.png?format=100w) diff --git a/spaces/HighCWu/GFPGAN-1.3/tests/test_ffhq_degradation_dataset.py b/spaces/HighCWu/GFPGAN-1.3/tests/test_ffhq_degradation_dataset.py deleted file mode 100644 index fa56c03fb8e23df26aa6ed8442a86b3c676eec78..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/GFPGAN-1.3/tests/test_ffhq_degradation_dataset.py +++ /dev/null @@ -1,96 +0,0 @@ -import pytest -import yaml - -from gfpgan.data.ffhq_degradation_dataset import FFHQDegradationDataset - - -def test_ffhq_degradation_dataset(): - - with open('tests/data/test_ffhq_degradation_dataset.yml', mode='r') as f: - opt = yaml.load(f, Loader=yaml.FullLoader) - - dataset = FFHQDegradationDataset(opt) - assert dataset.io_backend_opt['type'] == 'disk' # io backend - assert len(dataset) == 1 # whether to read correct meta info - assert dataset.kernel_list == ['iso', 'aniso'] # correct initialization the degradation configurations - assert dataset.color_jitter_prob == 1 - - # test __getitem__ - result = dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'lq', 'gt_path'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 512, 512) - assert result['lq'].shape == (3, 512, 512) - assert result['gt_path'] == 'tests/data/gt/00000000.png' - - # ------------------ test with probability = 0 -------------------- # - opt['color_jitter_prob'] = 0 - opt['color_jitter_pt_prob'] = 0 - opt['gray_prob'] = 0 - opt['io_backend'] = dict(type='disk') - dataset = FFHQDegradationDataset(opt) - assert dataset.io_backend_opt['type'] == 'disk' # io backend - assert len(dataset) == 1 # whether to read correct meta info - assert dataset.kernel_list == ['iso', 'aniso'] # correct initialization the degradation configurations - assert dataset.color_jitter_prob == 0 - - # test __getitem__ - result = dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'lq', 'gt_path'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 512, 512) - assert result['lq'].shape == (3, 512, 512) - assert result['gt_path'] == 'tests/data/gt/00000000.png' - - # ------------------ test lmdb backend -------------------- # - opt['dataroot_gt'] = 'tests/data/ffhq_gt.lmdb' - opt['io_backend'] = dict(type='lmdb') - - dataset = FFHQDegradationDataset(opt) - assert dataset.io_backend_opt['type'] == 'lmdb' # io backend - assert len(dataset) == 1 # whether to read correct meta info - assert dataset.kernel_list == ['iso', 'aniso'] # correct initialization the degradation configurations - assert dataset.color_jitter_prob == 0 - - # test __getitem__ - result = dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'lq', 'gt_path'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 512, 512) - assert result['lq'].shape == (3, 512, 512) - assert result['gt_path'] == '00000000' - - # ------------------ test with crop_components -------------------- # - opt['crop_components'] = True - opt['component_path'] = 'tests/data/test_eye_mouth_landmarks.pth' - opt['eye_enlarge_ratio'] = 1.4 - opt['gt_gray'] = True - opt['io_backend'] = dict(type='lmdb') - - dataset = FFHQDegradationDataset(opt) - assert dataset.crop_components is True - - # test __getitem__ - result = 
dataset.__getitem__(0) - # check returned keys - expected_keys = ['gt', 'lq', 'gt_path', 'loc_left_eye', 'loc_right_eye', 'loc_mouth'] - assert set(expected_keys).issubset(set(result.keys())) - # check shape and contents - assert result['gt'].shape == (3, 512, 512) - assert result['lq'].shape == (3, 512, 512) - assert result['gt_path'] == '00000000' - assert result['loc_left_eye'].shape == (4, ) - assert result['loc_right_eye'].shape == (4, ) - assert result['loc_mouth'].shape == (4, ) - - # ------------------ lmdb backend should have paths ends with lmdb -------------------- # - with pytest.raises(ValueError): - opt['dataroot_gt'] = 'tests/data/gt' - opt['io_backend'] = dict(type='lmdb') - dataset = FFHQDegradationDataset(opt) diff --git a/spaces/Hila/RobustViT/SegmentationTest/utils/iou.py b/spaces/Hila/RobustViT/SegmentationTest/utils/iou.py deleted file mode 100644 index 4135e15892849edf40a5cdde95e49bb501cf876f..0000000000000000000000000000000000000000 --- a/spaces/Hila/RobustViT/SegmentationTest/utils/iou.py +++ /dev/null @@ -1,93 +0,0 @@ -import torch -import numpy as np -from . import metric -from .confusionmatrix import ConfusionMatrix - - -class IoU(metric.Metric): - """Computes the intersection over union (IoU) per class and corresponding - mean (mIoU). - - Intersection over union (IoU) is a common evaluation metric for semantic - segmentation. The predictions are first accumulated in a confusion matrix - and the IoU is computed from it as follows: - - IoU = true_positive / (true_positive + false_positive + false_negative). - - Keyword arguments: - - num_classes (int): number of classes in the classification problem - - normalized (boolean, optional): Determines whether or not the confusion - matrix is normalized or not. Default: False. - - ignore_index (int or iterable, optional): Index of the classes to ignore - when computing the IoU. Can be an int, or any iterable of ints. - """ - - def __init__(self, num_classes, normalized=False, ignore_index=None): - super().__init__() - self.conf_metric = ConfusionMatrix(num_classes, normalized) - - if ignore_index is None: - self.ignore_index = None - elif isinstance(ignore_index, int): - self.ignore_index = (ignore_index,) - else: - try: - self.ignore_index = tuple(ignore_index) - except TypeError: - raise ValueError("'ignore_index' must be an int or iterable") - - def reset(self): - self.conf_metric.reset() - - def add(self, predicted, target): - """Adds the predicted and target pair to the IoU metric. - - Keyword arguments: - - predicted (Tensor): Can be a (N, K, H, W) tensor of - predicted scores obtained from the model for N examples and K classes, - or (N, H, W) tensor of integer values between 0 and K-1. - - target (Tensor): Can be a (N, K, H, W) tensor of - target scores for N examples and K classes, or (N, H, W) tensor of - integer values between 0 and K-1. - - """ - # Dimensions check - assert predicted.size(0) == target.size(0), \ - 'number of targets and predicted outputs do not match' - assert predicted.dim() == 3 or predicted.dim() == 4, \ - "predictions must be of dimension (N, H, W) or (N, K, H, W)" - assert target.dim() == 3 or target.dim() == 4, \ - "targets must be of dimension (N, H, W) or (N, K, H, W)" - - # If the tensor is in categorical format convert it to integer format - if predicted.dim() == 4: - _, predicted = predicted.max(1) - if target.dim() == 4: - _, target = target.max(1) - - self.conf_metric.add(predicted.view(-1), target.view(-1)) - - def value(self): - """Computes the IoU and mean IoU. 
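-
-        For a single class this is IoU = true_positive / (true_positive +
-        false_positive + false_negative); for example, a class with 3 true
-        positives, 1 false positive and 2 false negatives scores
-        3 / (3 + 1 + 2) = 0.5. (Worked example added for illustration.)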
- - The mean computation ignores NaN elements of the IoU array. - - Returns: - Tuple: (IoU, mIoU). The first output is the per class IoU, - for K classes it's numpy.ndarray with K elements. The second output, - is the mean IoU. - """ - conf_matrix = self.conf_metric.value() - if self.ignore_index is not None: - for index in self.ignore_index: - conf_matrix[:, self.ignore_index] = 0 - conf_matrix[self.ignore_index, :] = 0 - true_positive = np.diag(conf_matrix) - false_positive = np.sum(conf_matrix, 0) - true_positive - false_negative = np.sum(conf_matrix, 1) - true_positive - - # Just in case we get a division by 0, ignore/hide the error - with np.errstate(divide='ignore', invalid='ignore'): - iou = true_positive / (true_positive + false_positive + false_negative) - - return iou, np.nanmean(iou) \ No newline at end of file diff --git a/spaces/Hua626/QQsign/README.md b/spaces/Hua626/QQsign/README.md deleted file mode 100644 index bd56881a2a7709591343e2f15af9a6a8133e115b..0000000000000000000000000000000000000000 --- a/spaces/Hua626/QQsign/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: QQsign -emoji: 🦀 -colorFrom: blue -colorTo: purple -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/diffusionmodules/model.py b/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/diffusionmodules/model.py deleted file mode 100644 index d3a5db6aa2ef915e270f1ae135e4a9918fdd884c..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Image_Prediction/taming/modules/diffusionmodules/model.py +++ /dev/null @@ -1,776 +0,0 @@ -# pytorch_diffusion + derived encoder decoder -import math -import torch -import torch.nn as nn -import numpy as np - - -def get_timestep_embedding(timesteps, embedding_dim): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: - From Fairseq. - Build sinusoidal embeddings. - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
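-
-    The result is a [len(timesteps), embedding_dim] float tensor in which the
-    first half of each row holds sines and the second half cosines, evaluated
-    at geometrically spaced frequencies (zero-padded when embedding_dim is
-    odd). (Descriptive note added; see the implementation below.)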
- """ - assert len(timesteps.shape) == 1 - - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float32) * -emb) - emb = emb.to(device=timesteps.device) - emb = timesteps.float()[:, None] * emb[None, :] - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1) - if embedding_dim % 2 == 1: # zero pad - emb = torch.nn.functional.pad(emb, (0,1,0,0)) - return emb - - -def nonlinearity(x): - # swish - return x*torch.sigmoid(x) - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class Upsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - x = torch.nn.functional.interpolate(x, scale_factor=2.0, mode="nearest") - if self.with_conv: - x = self.conv(x) - return x - - -class Downsample(nn.Module): - def __init__(self, in_channels, with_conv): - super().__init__() - self.with_conv = with_conv - if self.with_conv: - # no asymmetric padding in torch conv, must do it ourselves - self.conv = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=3, - stride=2, - padding=0) - - def forward(self, x): - if self.with_conv: - pad = (0,1,0,1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - else: - x = torch.nn.functional.avg_pool2d(x, kernel_size=2, stride=2) - return x - - -class ResnetBlock(nn.Module): - def __init__(self, *, in_channels, out_channels=None, conv_shortcut=False, - dropout, temb_channels=512): - super().__init__() - self.in_channels = in_channels - out_channels = in_channels if out_channels is None else out_channels - self.out_channels = out_channels - self.use_conv_shortcut = conv_shortcut - - self.norm1 = Normalize(in_channels) - self.conv1 = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if temb_channels > 0: - self.temb_proj = torch.nn.Linear(temb_channels, - out_channels) - self.norm2 = Normalize(out_channels) - self.dropout = torch.nn.Dropout(dropout) - self.conv2 = torch.nn.Conv2d(out_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - self.conv_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - else: - self.nin_shortcut = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x, temb): - h = x - h = self.norm1(h) - h = nonlinearity(h) - h = self.conv1(h) - - if temb is not None: - h = h + self.temb_proj(nonlinearity(temb))[:,:,None,None] - - h = self.norm2(h) - h = nonlinearity(h) - h = self.dropout(h) - h = self.conv2(h) - - if self.in_channels != self.out_channels: - if self.use_conv_shortcut: - x = self.conv_shortcut(x) - else: - x = self.nin_shortcut(x) - - return x+h - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = 
torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = q.reshape(b,c,h*w) - q = q.permute(0,2,1) # b,hw,c - k = k.reshape(b,c,h*w) # b,c,hw - w_ = torch.bmm(q,k) # b,hw,hw w[b,i,j]=sum_c q[b,i,c]k[b,c,j] - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b,c,h*w) - w_ = w_.permute(0,2,1) # b,hw,hw (first hw of k, second of q) - h_ = torch.bmm(v,w_) # b, c,hw (hw of q) h_[b,c,j] = sum_i v[b,c,i] w_[b,i,j] - h_ = h_.reshape(b,c,h,w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Model(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, use_timestep=True): - super().__init__() - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x, t=None): - #assert x.shape[2] == 
x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Encoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, double_z=True, **ignore_kwargs): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - - # downsampling - self.conv_in = torch.nn.Conv2d(in_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - 2*z_channels if double_z else z_channels, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x): - #assert x.shape[2] == x.shape[3] == self.resolution, "{}, {}, {}".format(x.shape[2], x.shape[3], self.resolution) - - # timestep embedding - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - h 
= self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class Decoder(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, in_channels, - resolution, z_channels, give_pre_end=False, **ignorekwargs): - super().__init__() - self.ch = ch - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.in_channels = in_channels - self.give_pre_end = give_pre_end - - # compute in_ch_mult, block_in and curr_res at lowest res - in_ch_mult = (1,)+tuple(ch_mult) - block_in = ch*ch_mult[self.num_resolutions-1] - curr_res = resolution // 2**(self.num_resolutions-1) - self.z_shape = (1,z_channels,curr_res,curr_res) - print("Working with z of shape {} = {} dimensions.".format( - self.z_shape, np.prod(self.z_shape))) - - # z to block_in - self.conv_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=3, - stride=1, - padding=1) - - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, z): - #assert z.shape[1:] == self.z_shape[1:] - self.last_z_shape = z.shape - - # timestep embedding - temb = None - - # z to block_in - h = self.conv_in(z) - - # middle - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block](h, temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - if self.give_pre_end: - return h - - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - -class VUNet(nn.Module): - def __init__(self, *, ch, out_ch, ch_mult=(1,2,4,8), num_res_blocks, - attn_resolutions, dropout=0.0, resamp_with_conv=True, - in_channels, c_channels, - resolution, z_channels, use_timestep=False, **ignore_kwargs): - super().__init__() - self.ch = ch - self.temb_ch = self.ch*4 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - - self.use_timestep = use_timestep - if self.use_timestep: - # timestep embedding - self.temb = nn.Module() - self.temb.dense = nn.ModuleList([ - torch.nn.Linear(self.ch, - 
self.temb_ch), - torch.nn.Linear(self.temb_ch, - self.temb_ch), - ]) - - # downsampling - self.conv_in = torch.nn.Conv2d(c_channels, - self.ch, - kernel_size=3, - stride=1, - padding=1) - - curr_res = resolution - in_ch_mult = (1,)+tuple(ch_mult) - self.down = nn.ModuleList() - for i_level in range(self.num_resolutions): - block = nn.ModuleList() - attn = nn.ModuleList() - block_in = ch*in_ch_mult[i_level] - block_out = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks): - block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - down = nn.Module() - down.block = block - down.attn = attn - if i_level != self.num_resolutions-1: - down.downsample = Downsample(block_in, resamp_with_conv) - curr_res = curr_res // 2 - self.down.append(down) - - self.z_in = torch.nn.Conv2d(z_channels, - block_in, - kernel_size=1, - stride=1, - padding=0) - # middle - self.mid = nn.Module() - self.mid.block_1 = ResnetBlock(in_channels=2*block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - self.mid.attn_1 = AttnBlock(block_in) - self.mid.block_2 = ResnetBlock(in_channels=block_in, - out_channels=block_in, - temb_channels=self.temb_ch, - dropout=dropout) - - # upsampling - self.up = nn.ModuleList() - for i_level in reversed(range(self.num_resolutions)): - block = nn.ModuleList() - attn = nn.ModuleList() - block_out = ch*ch_mult[i_level] - skip_in = ch*ch_mult[i_level] - for i_block in range(self.num_res_blocks+1): - if i_block == self.num_res_blocks: - skip_in = ch*in_ch_mult[i_level] - block.append(ResnetBlock(in_channels=block_in+skip_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - if curr_res in attn_resolutions: - attn.append(AttnBlock(block_in)) - up = nn.Module() - up.block = block - up.attn = attn - if i_level != 0: - up.upsample = Upsample(block_in, resamp_with_conv) - curr_res = curr_res * 2 - self.up.insert(0, up) # prepend to get consistent order - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_ch, - kernel_size=3, - stride=1, - padding=1) - - - def forward(self, x, z): - #assert x.shape[2] == x.shape[3] == self.resolution - - if self.use_timestep: - # timestep embedding - assert t is not None - temb = get_timestep_embedding(t, self.ch) - temb = self.temb.dense[0](temb) - temb = nonlinearity(temb) - temb = self.temb.dense[1](temb) - else: - temb = None - - # downsampling - hs = [self.conv_in(x)] - for i_level in range(self.num_resolutions): - for i_block in range(self.num_res_blocks): - h = self.down[i_level].block[i_block](hs[-1], temb) - if len(self.down[i_level].attn) > 0: - h = self.down[i_level].attn[i_block](h) - hs.append(h) - if i_level != self.num_resolutions-1: - hs.append(self.down[i_level].downsample(hs[-1])) - - # middle - h = hs[-1] - z = self.z_in(z) - h = torch.cat((h,z),dim=1) - h = self.mid.block_1(h, temb) - h = self.mid.attn_1(h) - h = self.mid.block_2(h, temb) - - # upsampling - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks+1): - h = self.up[i_level].block[i_block]( - torch.cat([h, hs.pop()], dim=1), temb) - if len(self.up[i_level].attn) > 0: - h = self.up[i_level].attn[i_block](h) - if i_level != 0: - h = self.up[i_level].upsample(h) - - # end - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - - 
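-# Illustrative sketch (hypothetical, with a small made-up config; the function
-# name and all values below are assumptions, not part of the original module):
-# a tiny smoke test for the Encoder/Decoder pair defined above. It checks that
-# each extra ch_mult entry halves the spatial size on the way down and that the
-# decoder mirrors the encoder back to the input shape.
-def _sketch_encoder_decoder_roundtrip():
-    enc = Encoder(ch=32, out_ch=3, ch_mult=(1, 2), num_res_blocks=1,
-                  attn_resolutions=[], in_channels=3, resolution=64,
-                  z_channels=4, double_z=False)
-    dec = Decoder(ch=32, out_ch=3, ch_mult=(1, 2), num_res_blocks=1,
-                  attn_resolutions=[], in_channels=3, resolution=64,
-                  z_channels=4)
-    x = torch.randn(1, 3, 64, 64)
-    z = enc(x)
-    assert z.shape == (1, 4, 32, 32)   # one downsampling stage: 64 -> 32
-    assert dec(z).shape == x.shape     # decoder restores (1, 3, 64, 64)
-
-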
-class SimpleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, *args, **kwargs): - super().__init__() - self.model = nn.ModuleList([nn.Conv2d(in_channels, in_channels, 1), - ResnetBlock(in_channels=in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=2 * in_channels, - out_channels=4 * in_channels, - temb_channels=0, dropout=0.0), - ResnetBlock(in_channels=4 * in_channels, - out_channels=2 * in_channels, - temb_channels=0, dropout=0.0), - nn.Conv2d(2*in_channels, in_channels, 1), - Upsample(in_channels, with_conv=True)]) - # end - self.norm_out = Normalize(in_channels) - self.conv_out = torch.nn.Conv2d(in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - for i, layer in enumerate(self.model): - if i in [1,2,3]: - x = layer(x, None) - else: - x = layer(x) - - h = self.norm_out(x) - h = nonlinearity(h) - x = self.conv_out(h) - return x - - -class UpsampleDecoder(nn.Module): - def __init__(self, in_channels, out_channels, ch, num_res_blocks, resolution, - ch_mult=(2,2), dropout=0.0): - super().__init__() - # upsampling - self.temb_ch = 0 - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - block_in = in_channels - curr_res = resolution // 2 ** (self.num_resolutions - 1) - self.res_blocks = nn.ModuleList() - self.upsample_blocks = nn.ModuleList() - for i_level in range(self.num_resolutions): - res_block = [] - block_out = ch * ch_mult[i_level] - for i_block in range(self.num_res_blocks + 1): - res_block.append(ResnetBlock(in_channels=block_in, - out_channels=block_out, - temb_channels=self.temb_ch, - dropout=dropout)) - block_in = block_out - self.res_blocks.append(nn.ModuleList(res_block)) - if i_level != self.num_resolutions - 1: - self.upsample_blocks.append(Upsample(block_in, True)) - curr_res = curr_res * 2 - - # end - self.norm_out = Normalize(block_in) - self.conv_out = torch.nn.Conv2d(block_in, - out_channels, - kernel_size=3, - stride=1, - padding=1) - - def forward(self, x): - # upsampling - h = x - for k, i_level in enumerate(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = self.res_blocks[i_level][i_block](h, None) - if i_level != self.num_resolutions - 1: - h = self.upsample_blocks[k](h) - h = self.norm_out(h) - h = nonlinearity(h) - h = self.conv_out(h) - return h - diff --git a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/lr_scheduler.py b/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/lr_scheduler.py deleted file mode 100644 index e598ed120159c53da6820a55ad86b89f5c70c82d..0000000000000000000000000000000000000000 --- a/spaces/HuangLab/CELL-E_2-Sequence_Prediction/taming/lr_scheduler.py +++ /dev/null @@ -1,34 +0,0 @@ -import numpy as np - - -class LambdaWarmUpCosineScheduler: - """ - note: use with a base_lr of 1.0 - """ - def __init__(self, warm_up_steps, lr_min, lr_max, lr_start, max_decay_steps, verbosity_interval=0): - self.lr_warm_up_steps = warm_up_steps - self.lr_start = lr_start - self.lr_min = lr_min - self.lr_max = lr_max - self.lr_max_decay_steps = max_decay_steps - self.last_lr = 0. 
- self.verbosity_interval = verbosity_interval - - def schedule(self, n): - if self.verbosity_interval > 0: - if n % self.verbosity_interval == 0: print(f"current step: {n}, recent lr-multiplier: {self.last_lr}") - if n < self.lr_warm_up_steps: - lr = (self.lr_max - self.lr_start) / self.lr_warm_up_steps * n + self.lr_start - self.last_lr = lr - return lr - else: - t = (n - self.lr_warm_up_steps) / (self.lr_max_decay_steps - self.lr_warm_up_steps) - t = min(t, 1.0) - lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * ( - 1 + np.cos(t * np.pi)) - self.last_lr = lr - return lr - - def __call__(self, n): - return self.schedule(n) - diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/same_pad.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/same_pad.py deleted file mode 100644 index 4c04990ea6fdb291f162ee8ac3d17a92483daf8e..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/same_pad.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -from torch import nn - - -class SamePad(nn.Module): - def __init__(self, kernel_size, causal=False): - super().__init__() - if causal: - self.remove = kernel_size - 1 - else: - self.remove = 1 if kernel_size % 2 == 0 else 0 - - def forward(self, x): - if self.remove > 0: - x = x[:, :, : -self.remove] - return x diff --git a/spaces/Izal887/Konci887/infer_pack/commons.py b/spaces/Izal887/Konci887/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/Izal887/Konci887/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, 
segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/JLD/image-search/README.md b/spaces/JLD/image-search/README.md deleted file mode 100644 index 5350cf5bd95beac262842fa0cae6b7701d95dba8..0000000000000000000000000000000000000000 --- a/spaces/JLD/image-search/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Image Search -emoji: 🌖 -colorFrom: blue -colorTo: pink -sdk: streamlit -app_file: app.py -pinned: false 
---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/modules/F0Predictor/F0Predictor.py b/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index f56e49e7f0e6eab3babf0711cae2933371b9f9cc..0000000000000000000000000000000000000000 --- a/spaces/Jamel887/Rv-percobaan887/lib/infer_pack/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - """ - pass - - def compute_f0_uv(self, wav, p_len): - """ - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - """ - pass diff --git a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c b/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c deleted file mode 100644 index 5631d20a9a00db29e143a6e8e4e5c378d6bb850a..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/VITS-Umamusume-voice-synthesizer/monotonic_align/core.c +++ /dev/null @@ -1,21299 +0,0 @@ -/* Generated by Cython 0.29.21 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "monotonic_align.core", - "sources": [ - "core.pyx" - ] - }, - "module_name": "monotonic_align.core" -} -END: Cython Metadata */ - -#define PY_SSIZE_T_CLEAN -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. 
-#else -#define CYTHON_ABI "0_29_21" -#define CYTHON_HEX_VERSION 0x001D15F0 -#define CYTHON_FUTURE_DIVISION 0 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - 
#define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1) - #endif - #ifndef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #include "longintrepr.h" - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define 
__Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define Py_OptimizeFlag 0 -#endif -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" -#if PY_VERSION_HEX >= 0x030800A4 && PY_VERSION_HEX < 0x030800B2 - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, 0, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | 
METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) PyMem_Malloc(n) - #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? 
PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t PyInt_AsLong -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func)) -#else - #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(WIN32) || defined(MS_WINDOWS) - #define _USE_MATH_DEFINES -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__monotonic_align__core -#define __PYX_HAVE_API__monotonic_align__core -/* Early includes */ -#include "pythread.h" -#include <string.h> -#include <stdlib.h> -#include <stdio.h> -#include "pystate.h" -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include <cstdlib> - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - 
#define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? 
PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} -#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ 
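The `likely()`/`unlikely()` macros defined above wrap GCC's `__builtin_expect()` so the optimizer can keep the expected branch on the straight-line path; on other compilers they collapse to the bare expression and change nothing. A minimal sketch of the intended usage (illustrative only, not code from this file):

    /* Tell the compiler the allocation-failure branch is cold. */
    char *buf = (char *) malloc(n);
    if (unlikely(buf == NULL)) {
        return -1;  /* rare path: laid out away from the hot code */
    }

The generated code below relies on this pattern heavily, e.g. guarding reference-count fast paths with `likely(...)`.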
-static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; - - -static const char *__pyx_f[] = { - "core.pyx", - "stringsource", -}; -/* NoFastGil.proto */ -#define __Pyx_PyGILState_Ensure PyGILState_Ensure -#define __Pyx_PyGILState_Release PyGILState_Release -#define __Pyx_FastGIL_Remember() -#define __Pyx_FastGIL_Forget() -#define __Pyx_FastGilFuncInit() - -/* MemviewSliceStruct.proto */ -struct __pyx_memoryview_obj; -typedef struct { - struct __pyx_memoryview_obj *memview; - char *data; - Py_ssize_t shape[8]; - Py_ssize_t strides[8]; - Py_ssize_t suboffsets[8]; -} __Pyx_memviewslice; -#define __Pyx_MemoryView_Len(m) (m.shape[0]) - -/* Atomics.proto */ -#include <pythread.h> -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __pyx_atomic_int_type int -#if CYTHON_ATOMICS && __GNUC__ >= 4 && (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL >= 2)) &&\ - !defined(__i386__) - #define __pyx_atomic_incr_aligned(value, lock) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value, lock) __sync_fetch_and_sub(value, 1) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) && 0 - #include <intrin.h> - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type LONG - #define __pyx_atomic_incr_aligned(value, lock) InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) InterlockedDecrement(value) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#elif CYTHON_ATOMICS && (defined(__ICC) || defined(__INTEL_COMPILER)) && 0 - #define __pyx_atomic_incr_aligned(value, lock) _InterlockedIncrement(value) - #define __pyx_atomic_decr_aligned(value, lock) _InterlockedDecrement(value) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using Intel atomics" - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -typedef volatile __pyx_atomic_int_type __pyx_atomic_int; -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview), memview->lock) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* BufferFormatStructs.proto */ -#define IS_UNSIGNED(type) (((type) -1) > 0) -struct __Pyx_StructField_; -#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) -typedef struct { - const char* name; - struct __Pyx_StructField_* fields; - size_t size; - size_t arraysize[8]; - int ndim; - char typegroup; - char is_unsigned; - int flags; -} __Pyx_TypeInfo; -typedef struct __Pyx_StructField_ { - __Pyx_TypeInfo* type; - const char* name; - size_t offset; -} 
__Pyx_StructField; -typedef struct { - __Pyx_StructField* field; - size_t parent_offset; -} __Pyx_BufFmt_StackElem; -typedef struct { - __Pyx_StructField root; - __Pyx_BufFmt_StackElem* head; - size_t fmt_offset; - size_t new_count, enc_count; - size_t struct_alignment; - int is_complex; - char enc_type; - char new_packmode; - char enc_packmode; - char is_valid_array; -} __Pyx_BufFmt_Context; - - -/*--- Type declarations ---*/ -struct __pyx_array_obj; -struct __pyx_MemviewEnum_obj; -struct __pyx_memoryview_obj; -struct __pyx_memoryviewslice_obj; -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each; - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ -struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each { - int __pyx_n; - float max_neg_val; -}; - -/* "View.MemoryView":105 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ -struct __pyx_array_obj { - PyObject_HEAD - struct __pyx_vtabstruct_array *__pyx_vtab; - char *data; - Py_ssize_t len; - char *format; - int ndim; - Py_ssize_t *_shape; - Py_ssize_t *_strides; - Py_ssize_t itemsize; - PyObject *mode; - PyObject *_format; - void (*callback_free_data)(void *); - int free_data; - int dtype_is_object; -}; - - -/* "View.MemoryView":279 - * - * @cname('__pyx_MemviewEnum') - * cdef class Enum(object): # <<<<<<<<<<<<<< - * cdef object name - * def __init__(self, name): - */ -struct __pyx_MemviewEnum_obj { - PyObject_HEAD - PyObject *name; -}; - - -/* "View.MemoryView":330 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ -struct __pyx_memoryview_obj { - PyObject_HEAD - struct __pyx_vtabstruct_memoryview *__pyx_vtab; - PyObject *obj; - PyObject *_size; - PyObject *_array_interface; - PyThread_type_lock lock; - __pyx_atomic_int acquisition_count[2]; - __pyx_atomic_int *acquisition_count_aligned_p; - Py_buffer view; - int flags; - int dtype_is_object; - __Pyx_TypeInfo *typeinfo; -}; - - -/* "View.MemoryView":965 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ -struct __pyx_memoryviewslice_obj { - struct __pyx_memoryview_obj __pyx_base; - __Pyx_memviewslice from_slice; - PyObject *from_object; - PyObject *(*to_object_func)(char *); - int (*to_dtype_func)(char *, PyObject *); -}; - - - -/* "View.MemoryView":105 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ - -struct __pyx_vtabstruct_array { - PyObject *(*get_memview)(struct __pyx_array_obj *); -}; -static struct __pyx_vtabstruct_array *__pyx_vtabptr_array; - - -/* "View.MemoryView":330 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ - -struct __pyx_vtabstruct_memoryview { - char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject 
*(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *); - PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *); -}; -static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview; - - -/* "View.MemoryView":965 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ - -struct __pyx_vtabstruct__memoryviewslice { - struct __pyx_vtabstruct_memoryview __pyx_base; -}; -static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice; - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* 
__Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* MemviewSliceInit.proto */ -#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p) -#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview)) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int); -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* None.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* ArgTypeTest.proto */ -#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ - ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 
1 :\ - __Pyx__ArgTypeTest(obj, type, name, exact)) -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - 
((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* StrEquals.proto */ -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals -#else -#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals -#endif - -/* None.proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); - -/* UnaryNegOverflows.proto */ -#define UNARY_NEG_WOULD_OVERFLOW(x)\ - (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x))) - -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/ -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* decode_c_string_utf16.proto */ -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 0; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = -1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} - -/* decode_c_string.proto */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static 
CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* ExtTypeTest.proto */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void 
__Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* None.proto */ -static CYTHON_INLINE long __Pyx_div_long(long, long); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* PyObject_GenericGetAttr.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr -#endif - -/* SetVTable.proto */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable); - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* SetupReduce.proto */ -static int __Pyx_setup_reduce(PyObject* type_obj); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? 
c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -#if PY_MAJOR_VERSION < 3 - static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); - static void __Pyx_ReleaseBuffer(Py_buffer *view); -#else - #define __Pyx_GetBuffer PyObject_GetBuffer - #define __Pyx_ReleaseBuffer PyBuffer_Release -#endif - - -/* BufferStructDeclare.proto */ -typedef struct { - Py_ssize_t shape, strides, suboffsets; -} __Pyx_Buf_DimInfo; -typedef struct { - size_t refcount; - Py_buffer pybuffer; -} __Pyx_Buffer; -typedef struct { - __Pyx_Buffer *rcbuffer; - char *data; - __Pyx_Buf_DimInfo diminfo[8]; -} __Pyx_LocalBuf_ND; - -/* MemviewSliceIsContig.proto */ -static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim); - -/* OverlappingSlices.proto */ -static int __pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize); - -/* Capsule.proto */ -static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig); - -/* IsLittleEndian.proto */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); - -/* BufferFormatCheck.proto */ -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type); - -/* TypeInfoCompare.proto */ -static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b); - -/* MemviewSliceValidateAndInit.proto */ -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* MemviewSliceCopyTemplate.proto */ -static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject 
*); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/ -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/ -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ - -/* Module declarations from 'cython.view' */ - -/* Module declarations from 'cython' */ - -/* Module declarations from 'monotonic_align.core' */ -static PyTypeObject *__pyx_array_type = 0; -static PyTypeObject *__pyx_MemviewEnum_type = 0; -static PyTypeObject *__pyx_memoryview_type = 0; -static PyTypeObject *__pyx_memoryviewslice_type = 0; -static PyObject *generic = 0; -static PyObject *strided = 0; -static PyObject *indirect = 0; -static PyObject *contiguous = 0; -static PyObject *indirect_contiguous = 0; -static int __pyx_memoryview_thread_locks_used; -static PyThread_type_lock __pyx_memoryview_thread_locks[8]; -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/ -static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/ -static void *__pyx_align_pointer(void *, size_t); /*proto*/ -static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/ -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/ -static PyObject *_unellipsify(PyObject *, int); /*proto*/ -static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/ -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/ -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/ -static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memslice_transpose(__Pyx_memviewslice *); 
/*proto*/ -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/ -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/ -static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/ -static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/ -static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/ -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/ -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/ -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/ -static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/ -static int __pyx_memoryview_err(PyObject *, char *); /*proto*/ -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/ -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/ -static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/ -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/ -static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 
'U' : 'I', IS_UNSIGNED(int), 0 }; -static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 }; -#define __Pyx_MODULE_NAME "monotonic_align.core" -extern int __pyx_module_is_main_monotonic_align__core; -int __pyx_module_is_main_monotonic_align__core = 0; - -/* Implementation of 'monotonic_align.core' */ -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_MemoryError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_Ellipsis; -static PyObject *__pyx_builtin_id; -static PyObject *__pyx_builtin_IndexError; -static const char __pyx_k_O[] = "O"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_id[] = "id"; -static const char __pyx_k_new[] = "__new__"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_base[] = "base"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_ndim[] = "ndim"; -static const char __pyx_k_pack[] = "pack"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_step[] = "step"; -static const char __pyx_k_stop[] = "stop"; -static const char __pyx_k_t_xs[] = "t_xs"; -static const char __pyx_k_t_ys[] = "t_ys"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_ASCII[] = "ASCII"; -static const char __pyx_k_class[] = "__class__"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_flags[] = "flags"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_format[] = "format"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_pickle[] = "pickle"; -static const char __pyx_k_reduce[] = "__reduce__"; -static const char __pyx_k_struct[] = "struct"; -static const char __pyx_k_unpack[] = "unpack"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_values[] = "values"; -static const char __pyx_k_fortran[] = "fortran"; -static const char __pyx_k_memview[] = "memview"; -static const char __pyx_k_Ellipsis[] = "Ellipsis"; -static const char __pyx_k_getstate[] = "__getstate__"; -static const char __pyx_k_itemsize[] = "itemsize"; -static const char __pyx_k_pyx_type[] = "__pyx_type"; -static const char __pyx_k_setstate[] = "__setstate__"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_pyx_state[] = "__pyx_state"; -static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; -static const char __pyx_k_IndexError[] = "IndexError"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_pyx_result[] = "__pyx_result"; -static const char __pyx_k_pyx_vtable[] = "__pyx_vtable__"; -static const char __pyx_k_MemoryError[] = "MemoryError"; -static const char __pyx_k_PickleError[] = "PickleError"; -static const char __pyx_k_pyx_checksum[] = "__pyx_checksum"; -static const char __pyx_k_stringsource[] = "stringsource"; -static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer"; -static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; -static const char 
__pyx_k_View_MemoryView[] = "View.MemoryView"; -static const char __pyx_k_allocate_buffer[] = "allocate_buffer"; -static const char __pyx_k_dtype_is_object[] = "dtype_is_object"; -static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError"; -static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; -static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_strided_and_direct[] = "<strided and direct>"; -static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>"; -static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>"; -static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>"; -static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>"; -static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>"; -static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'"; -static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d."; -static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array"; -static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data."; -static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>"; -static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides"; -static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory."; -static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview"; -static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview"; -static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array"; -static const char __pyx_k_Incompatible_checksums_s_vs_0xb0[] = "Incompatible checksums (%s vs 0xb068931 = (name))"; -static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported"; -static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s"; -static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)"; -static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object"; -static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d (got %d and %d)"; -static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; -static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides."; -static PyObject *__pyx_n_s_ASCII; -static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri; -static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is; -static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor; -static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi; -static PyObject *__pyx_kp_s_Cannot_index_with_type_s; -static PyObject *__pyx_n_s_Ellipsis; -static PyObject *__pyx_kp_s_Empty_shape_tuple_for_cython_arr; -static PyObject *__pyx_kp_s_Incompatible_checksums_s_vs_0xb0; -static PyObject *__pyx_n_s_IndexError; -static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte; -static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr; -static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d; -static PyObject *__pyx_n_s_MemoryError; -static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x; -static PyObject 
*__pyx_kp_s_MemoryView_of_r_object; -static PyObject *__pyx_n_b_O; -static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a; -static PyObject *__pyx_n_s_PickleError; -static PyObject *__pyx_n_s_TypeError; -static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object; -static PyObject *__pyx_n_s_ValueError; -static PyObject *__pyx_n_s_View_MemoryView; -static PyObject *__pyx_n_s_allocate_buffer; -static PyObject *__pyx_n_s_base; -static PyObject *__pyx_n_s_c; -static PyObject *__pyx_n_u_c; -static PyObject *__pyx_n_s_class; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_kp_s_contiguous_and_direct; -static PyObject *__pyx_kp_s_contiguous_and_indirect; -static PyObject *__pyx_n_s_dict; -static PyObject *__pyx_n_s_dtype_is_object; -static PyObject *__pyx_n_s_encode; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_error; -static PyObject *__pyx_n_s_flags; -static PyObject *__pyx_n_s_format; -static PyObject *__pyx_n_s_fortran; -static PyObject *__pyx_n_u_fortran; -static PyObject *__pyx_n_s_getstate; -static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi; -static PyObject *__pyx_n_s_id; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_itemsize; -static PyObject *__pyx_kp_s_itemsize_0_for_cython_array; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_s_memview; -static PyObject *__pyx_n_s_mode; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_name_2; -static PyObject *__pyx_n_s_ndim; -static PyObject *__pyx_n_s_new; -static PyObject *__pyx_kp_s_no_default___reduce___due_to_non; -static PyObject *__pyx_n_s_obj; -static PyObject *__pyx_n_s_pack; -static PyObject *__pyx_n_s_paths; -static PyObject *__pyx_n_s_pickle; -static PyObject *__pyx_n_s_pyx_PickleError; -static PyObject *__pyx_n_s_pyx_checksum; -static PyObject *__pyx_n_s_pyx_getbuffer; -static PyObject *__pyx_n_s_pyx_result; -static PyObject *__pyx_n_s_pyx_state; -static PyObject *__pyx_n_s_pyx_type; -static PyObject *__pyx_n_s_pyx_unpickle_Enum; -static PyObject *__pyx_n_s_pyx_vtable; -static PyObject *__pyx_n_s_range; -static PyObject *__pyx_n_s_reduce; -static PyObject *__pyx_n_s_reduce_cython; -static PyObject *__pyx_n_s_reduce_ex; -static PyObject *__pyx_n_s_setstate; -static PyObject *__pyx_n_s_setstate_cython; -static PyObject *__pyx_n_s_shape; -static PyObject *__pyx_n_s_size; -static PyObject *__pyx_n_s_start; -static PyObject *__pyx_n_s_step; -static PyObject *__pyx_n_s_stop; -static PyObject *__pyx_kp_s_strided_and_direct; -static PyObject *__pyx_kp_s_strided_and_direct_or_indirect; -static PyObject *__pyx_kp_s_strided_and_indirect; -static PyObject *__pyx_kp_s_stringsource; -static PyObject *__pyx_n_s_struct; -static PyObject *__pyx_n_s_t_xs; -static PyObject *__pyx_n_s_t_ys; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_kp_s_unable_to_allocate_array_data; -static PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str; -static PyObject *__pyx_n_s_unpack; -static PyObject *__pyx_n_s_update; -static PyObject *__pyx_n_s_values; -static PyObject *__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */ 
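/*
 * This deleted file is Cython-generated C for "monotonic_align/core.pyx": a
 * batched maximum-path (monotonic alignment) search over score matrices. The
 * generator quotes the original .pyx source in comments like the ones below,
 * so the module can be reassembled from them. The sketch that follows is that
 * reassembly; only the two import lines are assumptions (the quoted fragments
 * begin at core.pyx line 7), everything else is verbatim from the quotes:
 *
 *   cimport cython
 *   from cython.parallel import prange
 *
 *   @cython.boundscheck(False)
 *   @cython.wraparound(False)
 *   cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil:
 *     cdef int x
 *     cdef int y
 *     cdef float v_prev
 *     cdef float v_cur
 *     cdef float tmp
 *     cdef int index = t_x - 1
 *
 *     # Forward DP pass: value[y, x] becomes the best cumulative score of any
 *     # monotonic path ending at (y, x); cells off the feasible band read as
 *     # max_neg_val so they can never win the max().
 *     for y in range(t_y):
 *       for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)):
 *         if x == y:
 *           v_cur = max_neg_val
 *         else:
 *           v_cur = value[y-1, x]
 *         if x == 0:
 *           if y == 0:
 *             v_prev = 0.
 *           else:
 *             v_prev = max_neg_val
 *         else:
 *           v_prev = value[y-1, x-1]
 *         value[y, x] += max(v_prev, v_cur)
 *
 *     # Backtracking pass: from the last row upwards, mark the chosen column
 *     # and step one column left whenever the diagonal predecessor won.
 *     for y in range(t_y - 1, -1, -1):
 *       path[y, index] = 1
 *       if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]):
 *         index = index - 1
 *
 *   @cython.boundscheck(False)
 *   @cython.wraparound(False)
 *   cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil:
 *     cdef int b = paths.shape[0]
 *     cdef int i
 *     # Batch items are independent, so the loop compiles to an OpenMP
 *     # parallel for (the #pragma omp blocks in the generated C below).
 *     for i in prange(b, nogil=True):
 *       maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i])
 *
 * Note that the forward pass updates values in place and paths receives the
 * resulting 0/1 alignment mask, so both must be writable buffers.
 */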
-static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */ -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */ -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */ -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject 
*__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_int_0; -static PyObject *__pyx_int_1; -static PyObject *__pyx_int_184977713; -static PyObject *__pyx_int_neg_1; -static float __pyx_k_; -static PyObject *__pyx_tuple__2; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__5; -static PyObject *__pyx_tuple__6; -static PyObject *__pyx_tuple__7; -static PyObject *__pyx_tuple__8; -static PyObject *__pyx_tuple__9; -static PyObject *__pyx_slice__16; -static 
PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__11; -static PyObject *__pyx_tuple__12; -static PyObject *__pyx_tuple__13; -static PyObject *__pyx_tuple__14; -static PyObject *__pyx_tuple__15; -static PyObject *__pyx_tuple__17; -static PyObject *__pyx_tuple__18; -static PyObject *__pyx_tuple__19; -static PyObject *__pyx_tuple__20; -static PyObject *__pyx_tuple__21; -static PyObject *__pyx_tuple__22; -static PyObject *__pyx_tuple__23; -static PyObject *__pyx_tuple__24; -static PyObject *__pyx_tuple__25; -static PyObject *__pyx_codeobj__26; -/* Late includes */ - -/* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - -static void __pyx_f_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) { - float __pyx_v_max_neg_val = __pyx_k_; - int __pyx_v_x; - int __pyx_v_y; - float __pyx_v_v_prev; - float __pyx_v_v_cur; - int __pyx_v_index; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - long __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_ssize_t __pyx_t_10; - float __pyx_t_11; - float __pyx_t_12; - float __pyx_t_13; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - if (__pyx_optional_args) { - if (__pyx_optional_args->__pyx_n > 0) { - __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val; - } - } - - /* "monotonic_align/core.pyx":13 - * cdef float v_cur - * cdef float tmp - * cdef int index = t_x - 1 # <<<<<<<<<<<<<< - * - * for y in range(t_y): - */ - __pyx_v_index = (__pyx_v_t_x - 1); - - /* "monotonic_align/core.pyx":15 - * cdef int index = t_x - 1 - * - * for y in range(t_y): # <<<<<<<<<<<<<< - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - */ - __pyx_t_1 = __pyx_v_t_y; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_y = __pyx_t_3; - - /* "monotonic_align/core.pyx":16 - * - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<< - * if x == y: - * v_cur = max_neg_val - */ - __pyx_t_4 = (__pyx_v_y + 1); - __pyx_t_5 = __pyx_v_t_x; - if (((__pyx_t_4 < __pyx_t_5) != 0)) { - __pyx_t_6 = __pyx_t_4; - } else { - __pyx_t_6 = __pyx_t_5; - } - __pyx_t_4 = __pyx_t_6; - __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y); - __pyx_t_6 = 0; - if (((__pyx_t_5 > __pyx_t_6) != 0)) { - __pyx_t_7 = __pyx_t_5; - } else { - __pyx_t_7 = __pyx_t_6; - } - __pyx_t_6 = __pyx_t_4; - for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) { - __pyx_v_x = __pyx_t_5; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - __pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":18 - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - * v_cur = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_cur = value[y-1, x] - */ - __pyx_v_v_cur = __pyx_v_max_neg_val; - - /* "monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = 
max_neg_val - * else: - */ - goto __pyx_L7; - } - - /* "monotonic_align/core.pyx":20 - * v_cur = max_neg_val - * else: - * v_cur = value[y-1, x] # <<<<<<<<<<<<<< - * if x == 0: - * if y == 0: - */ - /*else*/ { - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_x; - __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))); - } - __pyx_L7:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - __pyx_t_8 = ((__pyx_v_x == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - __pyx_t_8 = ((__pyx_v_y == 0) != 0); - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":23 - * if x == 0: - * if y == 0: - * v_prev = 0. # <<<<<<<<<<<<<< - * else: - * v_prev = max_neg_val - */ - __pyx_v_v_prev = 0.; - - /* "monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - goto __pyx_L9; - } - - /* "monotonic_align/core.pyx":25 - * v_prev = 0. - * else: - * v_prev = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_prev = value[y-1, x-1] - */ - /*else*/ { - __pyx_v_v_prev = __pyx_v_max_neg_val; - } - __pyx_L9:; - - /* "monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - goto __pyx_L8; - } - - /* "monotonic_align/core.pyx":27 - * v_prev = max_neg_val - * else: - * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<< - * value[y, x] += max(v_prev, v_cur) - * - */ - /*else*/ { - __pyx_t_10 = (__pyx_v_y - 1); - __pyx_t_9 = (__pyx_v_x - 1); - __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) ))); - } - __pyx_L8:; - - /* "monotonic_align/core.pyx":28 - * else: - * v_prev = value[y-1, x-1] - * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<< - * - * for y in range(t_y - 1, -1, -1): - */ - __pyx_t_11 = __pyx_v_v_cur; - __pyx_t_12 = __pyx_v_v_prev; - if (((__pyx_t_11 > __pyx_t_12) != 0)) { - __pyx_t_13 = __pyx_t_11; - } else { - __pyx_t_13 = __pyx_t_12; - } - __pyx_t_9 = __pyx_v_y; - __pyx_t_10 = __pyx_v_x; - *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13; - } - } - - /* "monotonic_align/core.pyx":30 - * value[y, x] += max(v_prev, v_cur) - * - * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<< - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - */ - for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_y = __pyx_t_1; - - /* "monotonic_align/core.pyx":31 - * - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 # <<<<<<<<<<<<<< - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 - */ - __pyx_t_10 = __pyx_v_y; - __pyx_t_9 = __pyx_v_index; - *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1; - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - __pyx_t_14 = 
((__pyx_v_index != 0) != 0); - if (__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0); - if (!__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_index; - __pyx_t_15 = (__pyx_v_y - 1); - __pyx_t_16 = (__pyx_v_index - 1); - __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0); - __pyx_t_8 = __pyx_t_14; - __pyx_L13_bool_binop_done:; - if (__pyx_t_8) { - - /* "monotonic_align/core.pyx":33 - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_index = (__pyx_v_index - 1); - - /* "monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - } - } - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - - /* function exit code */ -} - -/* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static void __pyx_f_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) { - CYTHON_UNUSED int __pyx_v_b; - int __pyx_v_i; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } }; - Py_ssize_t __pyx_t_6; - Py_ssize_t __pyx_t_7; - - /* "monotonic_align/core.pyx":39 - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: - * cdef int b = paths.shape[0] # <<<<<<<<<<<<<< - * cdef int i - * for i in prange(b, nogil=True): - */ - __pyx_v_b = (__pyx_v_paths.shape[0]); - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - { - #ifdef WITH_THREAD - PyThreadState *_save; - Py_UNBLOCK_THREADS - __Pyx_FastGIL_Remember(); - #endif - /*try:*/ { - __pyx_t_1 = __pyx_v_b; - if ((1 == 0)) abort(); - { - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) (x) - #define unlikely(x) (x) - #endif - __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1; - if (__pyx_t_3 > 0) - { - #ifdef _OPENMP - #pragma omp parallel private(__pyx_t_6, 
__pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5) - #endif /* _OPENMP */ - { - #ifdef _OPENMP - #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i) - #endif /* _OPENMP */ - for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){ - { - __pyx_v_i = (int)(0 + 1 * __pyx_t_2); - - /* "monotonic_align/core.pyx":42 - * cdef int i - * for i in prange(b, nogil=True): - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<< - */ - __pyx_t_4.data = __pyx_v_paths.data; - __pyx_t_4.memview = __pyx_v_paths.memview; - __PYX_INC_MEMVIEW(&__pyx_t_4, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0]; - __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_4.shape[0] = __pyx_v_paths.shape[1]; -__pyx_t_4.strides[0] = __pyx_v_paths.strides[1]; - __pyx_t_4.suboffsets[0] = -1; - -__pyx_t_4.shape[1] = __pyx_v_paths.shape[2]; -__pyx_t_4.strides[1] = __pyx_v_paths.strides[2]; - __pyx_t_4.suboffsets[1] = -1; - -__pyx_t_5.data = __pyx_v_values.data; - __pyx_t_5.memview = __pyx_v_values.memview; - __PYX_INC_MEMVIEW(&__pyx_t_5, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0]; - __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_5.shape[0] = __pyx_v_values.shape[1]; -__pyx_t_5.strides[0] = __pyx_v_values.strides[1]; - __pyx_t_5.suboffsets[0] = -1; - -__pyx_t_5.shape[1] = __pyx_v_values.shape[2]; -__pyx_t_5.strides[1] = __pyx_v_values.strides[2]; - __pyx_t_5.suboffsets[1] = -1; - -__pyx_t_6 = __pyx_v_i; - __pyx_t_7 = __pyx_v_i; - __pyx_f_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL); - __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; - __pyx_t_4.data = NULL; - __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; - __pyx_t_5.data = NULL; - } - } - } - } - } - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) - #endif - } - - /* "monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - /*finally:*/ { - /*normal exit:*/{ - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - Py_BLOCK_THREADS - #endif - goto __pyx_L5; - } - __pyx_L5:; - } - } - - /* "monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - - /* function exit code */ -} - -/* Python wrapper */ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyObject *__pyx_pw_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } }; - 
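 /* Argument handling: this Python-visible wrapper accepts paths, values,
    t_ys and t_xs positionally or by keyword, then converts each object into
    a typed, C-contiguous memoryview slice via the
    __Pyx_PyObject_to_MemoryviewSlice_* helpers (requesting PyBUF_WRITABLE,
    since paths and values are written to) before dispatching to the nogil
    implementation above. */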
int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - } - __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("maximum_path_c", 0); - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) } - __pyx_t_1 = __Pyx_void_to_None(__pyx_f_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - -/* Python wrapper */ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - Py_ssize_t __pyx_v_itemsize; - PyObject *__pyx_v_format = 0; - PyObject *__pyx_v_mode = 0; - int __pyx_v_allocate_buffer; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0}; - PyObject* values[5] = {0,0,0,0,0}; - values[3] = ((PyObject *)__pyx_n_s_c); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = 
__Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 122, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 122, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode); - if (value) { values[3] = value; kw_args--; } - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer); - if (value) { values[4] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 122, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_shape = ((PyObject*)values[0]); - __pyx_v_itemsize = __Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 122, __pyx_L3_error) - __pyx_v_format = values[2]; - __pyx_v_mode = values[3]; - if (values[4]) { - __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, __pyx_L3_error) - } else { - - /* "View.MemoryView":123 - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, - * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<< - * - * cdef int idx - */ - __pyx_v_allocate_buffer = ((int)1); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 122, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 122, __pyx_L1_error) - if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) { - PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 122, __pyx_L1_error) - } - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer); - - /* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject 
*__pyx_v_mode, int __pyx_v_allocate_buffer) { - int __pyx_v_idx; - Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_dim; - PyObject **__pyx_v_p; - char __pyx_v_order; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - char *__pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - Py_ssize_t __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - __Pyx_INCREF(__pyx_v_format); - - /* "View.MemoryView":129 - * cdef PyObject **p - * - * self.ndim = len(shape) # <<<<<<<<<<<<<< - * self.itemsize = itemsize - * - */ - if (unlikely(__pyx_v_shape == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 129, __pyx_L1_error) - } - __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 129, __pyx_L1_error) - __pyx_v_self->ndim = ((int)__pyx_t_1); - - /* "View.MemoryView":130 - * - * self.ndim = len(shape) - * self.itemsize = itemsize # <<<<<<<<<<<<<< - * - * if not self.ndim: - */ - __pyx_v_self->itemsize = __pyx_v_itemsize; - - /* "View.MemoryView":132 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":133 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 133, __pyx_L1_error) - - /* "View.MemoryView":132 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - } - - /* "View.MemoryView":135 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":136 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 136, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 136, __pyx_L1_error) - - /* "View.MemoryView":135 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - } - - /* "View.MemoryView":138 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - __pyx_t_2 = PyBytes_Check(__pyx_v_format); - __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":139 - * - * if not isinstance(format, bytes): - * format = format.encode('ASCII') # <<<<<<<<<<<<<< - 
* self._format = format # keep a reference to the byte string - * self.format = self._format - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - } - } - __pyx_t_3 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 139, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":138 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - } - - /* "View.MemoryView":140 - * if not isinstance(format, bytes): - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<< - * self.format = self._format - * - */ - if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 140, __pyx_L1_error) - __pyx_t_3 = __pyx_v_format; - __Pyx_INCREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_v_self->_format); - __Pyx_DECREF(__pyx_v_self->_format); - __pyx_v_self->_format = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":141 - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - * self.format = self._format # <<<<<<<<<<<<<< - * - * - */ - if (unlikely(__pyx_v_self->_format == Py_None)) { - PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found"); - __PYX_ERR(1, 141, __pyx_L1_error) - } - __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 141, __pyx_L1_error) - __pyx_v_self->format = __pyx_t_7; - - /* "View.MemoryView":144 - * - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<< - * self._strides = self._shape + self.ndim - * - */ - __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2))); - - /* "View.MemoryView":145 - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) - * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<< - * - * if not self._shape: - */ - __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim); - - /* "View.MemoryView":147 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":148 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 148, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 148, __pyx_L1_error) - - /* "View.MemoryView":147 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - } - - /* "View.MemoryView":151 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - */ - __pyx_t_8 = 0; - __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0; - for (;;) { - if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 151, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 151, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 151, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_9; - __pyx_v_idx = __pyx_t_8; - __pyx_t_8 = (__pyx_t_8 + 1); - - /* "View.MemoryView":152 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - __pyx_t_4 = ((__pyx_v_dim <= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":153 - * for idx, dim in enumerate(shape): - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) # <<<<<<<<<<<<<< - * self._shape[idx] = dim - * - */ - __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6); - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 153, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 153, __pyx_L1_error) - - /* "View.MemoryView":152 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - } - - /* "View.MemoryView":154 - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim # <<<<<<<<<<<<<< - * - * cdef char order - */ - (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim; - - /* "View.MemoryView":151 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." 
% (idx, dim)) - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":157 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 157, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":158 - * cdef char order - * if mode == 'fortran': - * order = b'F' # <<<<<<<<<<<<<< - * self.mode = u'fortran' - * elif mode == 'c': - */ - __pyx_v_order = 'F'; - - /* "View.MemoryView":159 - * if mode == 'fortran': - * order = b'F' - * self.mode = u'fortran' # <<<<<<<<<<<<<< - * elif mode == 'c': - * order = b'C' - */ - __Pyx_INCREF(__pyx_n_u_fortran); - __Pyx_GIVEREF(__pyx_n_u_fortran); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_fortran; - - /* "View.MemoryView":157 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":160 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 160, __pyx_L1_error) - if (likely(__pyx_t_4)) { - - /* "View.MemoryView":161 - * self.mode = u'fortran' - * elif mode == 'c': - * order = b'C' # <<<<<<<<<<<<<< - * self.mode = u'c' - * else: - */ - __pyx_v_order = 'C'; - - /* "View.MemoryView":162 - * elif mode == 'c': - * order = b'C' - * self.mode = u'c' # <<<<<<<<<<<<<< - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - */ - __Pyx_INCREF(__pyx_n_u_c); - __Pyx_GIVEREF(__pyx_n_u_c); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_c; - - /* "View.MemoryView":160 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":164 - * self.mode = u'c' - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<< - * - * self.len = fill_contig_strides_array(self._shape, self._strides, - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 164, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 164, __pyx_L1_error) - } - __pyx_L10:; - - /* "View.MemoryView":166 - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - * - * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<< - * itemsize, self.ndim, order) - * - */ - __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order); - - /* "View.MemoryView":169 - * itemsize, self.ndim, order) - * - * self.free_data = allocate_buffer # <<<<<<<<<<<<<< - * self.dtype_is_object = format == b'O' - * if allocate_buffer: - */ - __pyx_v_self->free_data = __pyx_v_allocate_buffer; - - /* "View.MemoryView":170 - * - * 
self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<< - * if allocate_buffer: - * - */ - __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 170, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 170, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_self->dtype_is_object = __pyx_t_4; - - /* "View.MemoryView":171 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = (__pyx_v_allocate_buffer != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":174 - * - * - * self.data = malloc(self.len) # <<<<<<<<<<<<<< - * if not self.data: - * raise MemoryError("unable to allocate array data.") - */ - __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len)); - - /* "View.MemoryView":175 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":176 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 176, __pyx_L1_error) - - /* "View.MemoryView":175 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - } - - /* "View.MemoryView":178 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":179 - * - * if self.dtype_is_object: - * p = self.data # <<<<<<<<<<<<<< - * for i in range(self.len / itemsize): - * p[i] = Py_None - */ - __pyx_v_p = ((PyObject **)__pyx_v_self->data); - - /* "View.MemoryView":180 - * if self.dtype_is_object: - * p = self.data - * for i in range(self.len / itemsize): # <<<<<<<<<<<<<< - * p[i] = Py_None - * Py_INCREF(Py_None) - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 180, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 180, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize); - __pyx_t_9 = __pyx_t_1; - for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) { - __pyx_v_i = __pyx_t_11; - - /* "View.MemoryView":181 - * p = self.data - * for i in range(self.len / itemsize): - * p[i] = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - (__pyx_v_p[__pyx_v_i]) = Py_None; - - /* "View.MemoryView":182 - * for i in range(self.len / itemsize): - * p[i] = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - 
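 /* The store above placed a borrowed pointer to None in slot i; the
    Py_INCREF below turns it into an owned reference, so an object-dtype
    array starts life fully initialized with strong references to None. */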
Py_INCREF(Py_None); - } - - /* "View.MemoryView":178 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - } - - /* "View.MemoryView":171 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":122 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_format); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":185 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_v_bufmode; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - char *__pyx_t_4; - Py_ssize_t __pyx_t_5; - int __pyx_t_6; - Py_ssize_t *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":186 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = -1; - - /* "View.MemoryView":187 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 187, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":188 - * cdef int bufmode = -1 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = 
(PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":187 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - goto __pyx_L3; - } - - /* "View.MemoryView":189 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 189, __pyx_L1_error) - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":190 - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - */ - __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":189 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - } - __pyx_L3:; - - /* "View.MemoryView":191 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":192 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 192, __pyx_L1_error) - - /* "View.MemoryView":191 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - } - - /* "View.MemoryView":193 - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data # <<<<<<<<<<<<<< - * info.len = self.len - * info.ndim = self.ndim - */ - __pyx_t_4 = __pyx_v_self->data; - __pyx_v_info->buf = __pyx_t_4; - - /* "View.MemoryView":194 - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - * info.len = self.len # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - __pyx_t_5 = __pyx_v_self->len; - __pyx_v_info->len = __pyx_t_5; - - /* "View.MemoryView":195 - * info.buf = self.data - * info.len = self.len - * info.ndim = self.ndim # <<<<<<<<<<<<<< - * info.shape = self._shape - * info.strides = self._strides - */ - __pyx_t_6 = __pyx_v_self->ndim; - __pyx_v_info->ndim = __pyx_t_6; - - /* "View.MemoryView":196 - * info.len = self.len - * info.ndim = self.ndim - * 
info.shape = self._shape # <<<<<<<<<<<<<< - * info.strides = self._strides - * info.suboffsets = NULL - */ - __pyx_t_7 = __pyx_v_self->_shape; - __pyx_v_info->shape = __pyx_t_7; - - /* "View.MemoryView":197 - * info.ndim = self.ndim - * info.shape = self._shape - * info.strides = self._strides # <<<<<<<<<<<<<< - * info.suboffsets = NULL - * info.itemsize = self.itemsize - */ - __pyx_t_7 = __pyx_v_self->_strides; - __pyx_v_info->strides = __pyx_t_7; - - /* "View.MemoryView":198 - * info.shape = self._shape - * info.strides = self._strides - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * info.itemsize = self.itemsize - * info.readonly = 0 - */ - __pyx_v_info->suboffsets = NULL; - - /* "View.MemoryView":199 - * info.strides = self._strides - * info.suboffsets = NULL - * info.itemsize = self.itemsize # <<<<<<<<<<<<<< - * info.readonly = 0 - * - */ - __pyx_t_5 = __pyx_v_self->itemsize; - __pyx_v_info->itemsize = __pyx_t_5; - - /* "View.MemoryView":200 - * info.suboffsets = NULL - * info.itemsize = self.itemsize - * info.readonly = 0 # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - __pyx_v_info->readonly = 0; - - /* "View.MemoryView":202 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":203 - * - * if flags & PyBUF_FORMAT: - * info.format = self.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_4 = __pyx_v_self->format; - __pyx_v_info->format = __pyx_t_4; - - /* "View.MemoryView":202 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":205 - * info.format = self.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.obj = self - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L5:; - - /* "View.MemoryView":207 - * info.format = NULL - * - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":185 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":211 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - -/* Python wrapper */ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - 
__Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":212 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":213 - * def __dealloc__(array self): - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) # <<<<<<<<<<<<<< - * elif self.free_data: - * if self.dtype_is_object: - */ - __pyx_v_self->callback_free_data(__pyx_v_self->data); - - /* "View.MemoryView":212 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":214 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - __pyx_t_1 = (__pyx_v_self->free_data != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":215 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":216 - * elif self.free_data: - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<< - * self._strides, self.ndim, False) - * free(self.data) - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0); - - /* "View.MemoryView":215 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - } - - /* "View.MemoryView":218 - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - * free(self.data) # <<<<<<<<<<<<<< - * PyObject_Free(self._shape) - * - */ - free(__pyx_v_self->data); - - /* "View.MemoryView":214 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - } - __pyx_L3:; - - /* "View.MemoryView":219 - * self._strides, self.ndim, False) - * free(self.data) - * PyObject_Free(self._shape) # <<<<<<<<<<<<<< - * - * @property - */ - PyObject_Free(__pyx_v_self->_shape); - - /* "View.MemoryView":211 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":222 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - -/* Python wrapper */ -static 
PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":223 - * @property - * def memview(self): - * return self.get_memview() # <<<<<<<<<<<<<< - * - * @cname('get_memview') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 223, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":222 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":226 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) { - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_memview", 0); - - /* "View.MemoryView":227 - * @cname('get_memview') - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<< - * return memoryview(self, flags, self.dtype_is_object) - * - */ - __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE); - - /* "View.MemoryView":228 - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - 
__pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 228, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":226 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":230 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":231 - * - * def __len__(self): - * return self._shape[0] # <<<<<<<<<<<<<< - * - * def __getattr__(self, attr): - */ - __pyx_r = (__pyx_v_self->_shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":230 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":233 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getattr__", 0); - - /* "View.MemoryView":234 - * - * def __getattr__(self, attr): - * return getattr(self.memview, attr) # <<<<<<<<<<<<<< - * - * def __getitem__(self, item): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), 
__pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 234, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":233 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":236 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":237 - * - * def __getitem__(self, item): - * return self.memview[item] # <<<<<<<<<<<<<< - * - * def __setitem__(self, item, value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 237, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":236 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":239 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - -/* Python wrapper */ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - 
__pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - - /* "View.MemoryView":240 - * - * def __setitem__(self, item, value): - * self.memview[item] = value # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 240, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 240, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":239 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", 
__pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":244 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - -static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) { - struct __pyx_array_obj *__pyx_v_result = 0; - struct __pyx_array_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("array_cwrapper", 0); - - /* "View.MemoryView":248 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - __pyx_t_1 = ((__pyx_v_buf == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":249 - * - * if buf == 
NULL: - * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<< - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 249, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":248 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":251 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - /*else*/ { - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":252 - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) # <<<<<<<<<<<<<< - * result.data = buf - * - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 252, __pyx_L1_error) - - /* "View.MemoryView":251 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - __pyx_t_5 = 
__Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 251, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5); - __pyx_t_5 = 0; - - /* "View.MemoryView":253 - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) - * result.data = buf # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->data = __pyx_v_buf; - } - __pyx_L3:; - - /* "View.MemoryView":255 - * result.data = buf - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":244 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":281 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - -/* Python wrapper */ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_name = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0}; - PyObject* values[1] = {0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 281, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - } - __pyx_v_name = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 281, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name); - - /* function 
exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__", 0); - - /* "View.MemoryView":282 - * cdef object name - * def __init__(self, name): - * self.name = name # <<<<<<<<<<<<<< - * def __repr__(self): - * return self.name - */ - __Pyx_INCREF(__pyx_v_name); - __Pyx_GIVEREF(__pyx_v_name); - __Pyx_GOTREF(__pyx_v_self->name); - __Pyx_DECREF(__pyx_v_self->name); - __pyx_v_self->name = __pyx_v_name; - - /* "View.MemoryView":281 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - - /* function exit code */ - __pyx_r = 0; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":283 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - -/* Python wrapper */ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":284 - * self.name = name - * def __repr__(self): - * return self.name # <<<<<<<<<<<<<< - * - * cdef generic = Enum("") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->name); - __pyx_r = __pyx_v_self->name; - goto __pyx_L0; - - /* "View.MemoryView":283 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_v_state = 0; - PyObject *__pyx_v__dict = 0; - int __pyx_v_use_setstate; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 
0); - - /* "(tree fragment)":5 - * cdef object _dict - * cdef bint use_setstate - * state = (self.name,) # <<<<<<<<<<<<<< - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_self->name); - __Pyx_GIVEREF(__pyx_v_self->name); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name); - __pyx_v_state = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "(tree fragment)":6 - * cdef bint use_setstate - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<< - * if _dict is not None: - * state += (_dict,) - */ - __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__dict = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - __pyx_t_2 = (__pyx_v__dict != Py_None); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":8 - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - * state += (_dict,) # <<<<<<<<<<<<<< - * use_setstate = True - * else: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v__dict); - __Pyx_GIVEREF(__pyx_v__dict); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict); - __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4)); - __pyx_t_4 = 0; - - /* "(tree fragment)":9 - * if _dict is not None: - * state += (_dict,) - * use_setstate = True # <<<<<<<<<<<<<< - * else: - * use_setstate = self.name is not None - */ - __pyx_v_use_setstate = 1; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - goto __pyx_L3; - } - - /* "(tree fragment)":11 - * use_setstate = True - * else: - * use_setstate = self.name is not None # <<<<<<<<<<<<<< - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_self->name != Py_None); - __pyx_v_use_setstate = __pyx_t_3; - } - __pyx_L3:; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - __pyx_t_3 = (__pyx_v_use_setstate != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":13 - * use_setstate = self.name is not None - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<< - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - 
PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - } - - /* "(tree fragment)":15 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __pyx_t_5 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - } - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_state); - __Pyx_XDECREF(__pyx_v__dict); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ 
(wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":17 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<< - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":298 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - -static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) { - Py_intptr_t __pyx_v_aligned_p; - size_t __pyx_v_offset; - void *__pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":300 - * cdef void *align_pointer(void *memory, size_t alignment) nogil: - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<< - * cdef size_t offset - * - */ - __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory); - - /* "View.MemoryView":304 - * - * with cython.cdivision(True): - * offset = aligned_p % alignment # <<<<<<<<<<<<<< - * - * if offset > 0: - */ - __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment); - - /* "View.MemoryView":306 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - __pyx_t_1 = ((__pyx_v_offset > 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":307 - * - * if offset > 0: - * aligned_p += alignment - offset # <<<<<<<<<<<<<< - * - * return aligned_p - */ - __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset)); - - /* "View.MemoryView":306 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - } - - /* "View.MemoryView":309 - * aligned_p += alignment - offset - * - * return aligned_p # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = ((void *)__pyx_v_aligned_p); - goto __pyx_L0; - 
- /* "View.MemoryView":298 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":345 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - -/* Python wrapper */ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_obj = 0; - int __pyx_v_flags; - int __pyx_v_dtype_is_object; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 345, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object); - if (value) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 345, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_obj = values[0]; - __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error) - if (values[2]) { - __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 345, __pyx_L3_error) - } else { - __pyx_v_dtype_is_object = ((int)0); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 345, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - - /* "View.MemoryView":346 - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj # <<<<<<<<<<<<<< - * self.flags = flags - * if type(self) is memoryview or obj is not None: - */ - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - __Pyx_GOTREF(__pyx_v_self->obj); - __Pyx_DECREF(__pyx_v_self->obj); - __pyx_v_self->obj = __pyx_v_obj; - - /* "View.MemoryView":347 - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj - * self.flags = flags # <<<<<<<<<<<<<< - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - */ - __pyx_v_self->flags = __pyx_v_flags; - - /* "View.MemoryView":348 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type)); - __pyx_t_3 = (__pyx_t_2 != 0); - if (!__pyx_t_3) { - } else { - __pyx_t_1 = __pyx_t_3; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_3 = (__pyx_v_obj != Py_None); - __pyx_t_2 = (__pyx_t_3 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":349 - * self.flags = flags - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<< - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - */ - __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 349, __pyx_L1_error) - - /* "View.MemoryView":350 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":351 - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None; - - /* "View.MemoryView":352 - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * global __pyx_memoryview_thread_locks_used - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":350 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - } - - /* "View.MemoryView":348 - * self.obj = obj - * self.flags = flags - * 
if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - } - - /* "View.MemoryView":355 - * - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":356 - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - */ - __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - - /* "View.MemoryView":357 - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<< - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1); - - /* "View.MemoryView":355 - * - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - } - - /* "View.MemoryView":358 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":359 - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<< - * if self.lock is NULL: - * raise MemoryError - */ - __pyx_v_self->lock = PyThread_allocate_lock(); - - /* "View.MemoryView":360 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":361 - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - PyErr_NoMemory(); __PYX_ERR(1, 361, __pyx_L1_error) - - /* "View.MemoryView":360 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - } - - /* "View.MemoryView":358 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - } - - /* "View.MemoryView":363 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":364 - * - * if flags & PyBUF_FORMAT: - * self.dtype_is_object = (self.view.format[0] == 
b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<< - * else: - * self.dtype_is_object = dtype_is_object - */ - __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_self->dtype_is_object = __pyx_t_1; - - /* "View.MemoryView":363 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - goto __pyx_L10; - } - - /* "View.MemoryView":366 - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<< - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - */ - /*else*/ { - __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object; - } - __pyx_L10:; - - /* "View.MemoryView":368 - * self.dtype_is_object = dtype_is_object - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<< - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL - */ - __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int)))); - - /* "View.MemoryView":370 - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL # <<<<<<<<<<<<<< - * - * def __dealloc__(memoryview self): - */ - __pyx_v_self->typeinfo = NULL; - - /* "View.MemoryView":345 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":372 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - -/* Python wrapper */ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) { - int __pyx_v_i; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyThread_type_lock __pyx_t_6; - PyThread_type_lock __pyx_t_7; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":373 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - __pyx_t_1 = (__pyx_v_self->obj != Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* 
"View.MemoryView":374 - * def __dealloc__(memoryview self): - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<< - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - */ - __Pyx_ReleaseBuffer((&__pyx_v_self->view)); - - /* "View.MemoryView":373 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":375 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":377 - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<< - * Py_DECREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL; - - /* "View.MemoryView":378 - * - * (<__pyx_buffer *> &self.view).obj = NULL - * Py_DECREF(Py_None) # <<<<<<<<<<<<<< - * - * cdef int i - */ - Py_DECREF(Py_None); - - /* "View.MemoryView":375 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - } - __pyx_L3:; - - /* "View.MemoryView":382 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":383 - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<< - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - */ - __pyx_t_3 = __pyx_memoryview_thread_locks_used; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":384 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":385 - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<< - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1); - - /* "View.MemoryView":386 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":388 - 
* if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<< - * break - * else: - */ - __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]); - - /* "View.MemoryView":387 - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break - */ - (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6; - (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7; - - /* "View.MemoryView":386 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - } - - /* "View.MemoryView":389 - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break # <<<<<<<<<<<<<< - * else: - * PyThread_free_lock(self.lock) - */ - goto __pyx_L6_break; - - /* "View.MemoryView":384 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - } - } - /*else*/ { - - /* "View.MemoryView":391 - * break - * else: - * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<< - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - */ - PyThread_free_lock(__pyx_v_self->lock); - } - __pyx_L6_break:; - - /* "View.MemoryView":382 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - } - - /* "View.MemoryView":372 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":393 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = <char *> self.view.buf - */ - -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - Py_ssize_t __pyx_v_dim; - char *__pyx_v_itemp; - PyObject *__pyx_v_idx = NULL; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - char *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_item_pointer", 0); 
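- /* Reviewer note (added commentary, not part of the Cython-generated output): get_item_pointer() resolves an index tuple to a raw element pointer. It starts from view.buf and, once per dimension, calls __pyx_pybuffer_index(), which applies that dimension's stride (and suboffset, for indirect buffers) and raises IndexError on out-of-range indices; the loop below is Cython's expansion of "for dim, idx in enumerate(index)", after which itemp addresses exactly one buffer item. */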
- - /* "View.MemoryView":395 - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<< - * - * for dim, idx in enumerate(index): - */ - __pyx_v_itemp = ((char *)__pyx_v_self->view.buf); - - /* "View.MemoryView":397 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) { - __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 397, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 397, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 397, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 397, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_1; - __pyx_t_1 = (__pyx_t_1 + 1); - - /* "View.MemoryView":398 - * - * for dim, idx in enumerate(index): - * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<< - * - * return itemp - */ - __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 398, __pyx_L1_error) - __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 398, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_7; - - /* "View.MemoryView":397 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":400 - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - * return itemp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_itemp; - goto __pyx_L0; - - /* "View.MemoryView":393 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - - /* function exit code */ - 
__pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":403 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_indices = NULL; - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - char *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":404 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":405 - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: - * return self # <<<<<<<<<<<<<< - * - * have_slices, indices = _unellipsify(index, self.view.ndim) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __pyx_r = ((PyObject *)__pyx_v_self); - goto __pyx_L0; - - /* "View.MemoryView":404 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - } - - /* "View.MemoryView":407 - * return self - * - * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * cdef char *itemp - */ - __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (likely(__pyx_t_3 != Py_None)) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 407, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 407, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - 
__Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 407, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_v_indices = __pyx_t_5; - __pyx_t_5 = 0; - - /* "View.MemoryView":410 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 410, __pyx_L1_error) - if (__pyx_t_2) { - - /* "View.MemoryView":411 - * cdef char *itemp - * if have_slices: - * return memview_slice(self, indices) # <<<<<<<<<<<<<< - * else: - * itemp = self.get_item_pointer(indices) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 411, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":410 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - } - - /* "View.MemoryView":413 - * return memview_slice(self, indices) - * else: - * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<< - * return self.convert_item_to_object(itemp) - * - */ - /*else*/ { - __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 413, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_6; - - /* "View.MemoryView":414 - * else: - * itemp = self.get_item_pointer(indices) - * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<< - * - * def __setitem__(memoryview self, object index, object value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 414, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":403 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":416 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - -/* Python wrapper */ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_obj = NULL; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - __Pyx_INCREF(__pyx_v_index); - - /* "View.MemoryView":417 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - __pyx_t_1 = (__pyx_v_self->view.readonly != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":418 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 418, __pyx_L1_error) - - /* "View.MemoryView":417 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - } - - /* "View.MemoryView":420 - * raise TypeError("Cannot assign to read-only memoryview") - * - * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * if have_slices: - */ - __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(__pyx_t_2 != Py_None)) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 420, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 420, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_3; - __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":422 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 422, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":423 - * - * if have_slices: - * obj = self.is_slice(value) # <<<<<<<<<<<<<< - * if obj: - * self.setitem_slice_assignment(self[index], obj) - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview 
*)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 423, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_obj = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":424 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":425 - * obj = self.is_slice(value) - * if obj: - * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<< - * else: - * self.setitem_slice_assign_scalar(self[index], value) - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":424 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":427 - * self.setitem_slice_assignment(self[index], obj) - * else: - * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<< - * else: - * self.setitem_indexed(index, value) - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 427, __pyx_L1_error) - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L5:; - - /* "View.MemoryView":422 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":429 - * self.setitem_slice_assign_scalar(self[index], value) - * else: - * self.setitem_indexed(index, value) # <<<<<<<<<<<<<< - * - * cdef is_slice(self, obj): - */ - /*else*/ { - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L4:; - - /* "View.MemoryView":416 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; 
- __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":431 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_slice", 0); - __Pyx_INCREF(__pyx_v_obj); - - /* "View.MemoryView":432 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "View.MemoryView":434 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":435 - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) # <<<<<<<<<<<<<< - * except TypeError: - * return None - */ - __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 435, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "View.MemoryView":434 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 434, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | 
PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L9_try_end; - __pyx_L4_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":436 - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - * except TypeError: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_9) { - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 436, __pyx_L6_except_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":437 - * self.dtype_is_object) - * except TypeError: - * return None # <<<<<<<<<<<<<< - * - * return obj - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_except_return; - } - goto __pyx_L6_except_error; - __pyx_L6_except_error:; - - /* "View.MemoryView":433 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L7_except_return:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L0; - __pyx_L9_try_end:; - } - - /* "View.MemoryView":432 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - } - - /* "View.MemoryView":439 - * return None - * - * return obj # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assignment(self, dst, src): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_obj); - __pyx_r = __pyx_v_obj; - goto __pyx_L0; - - /* "View.MemoryView":431 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":441 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) { - __Pyx_memviewslice __pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_src_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - __Pyx_memviewslice *__pyx_t_2; - PyObject *__pyx_t_3 
= NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assignment", 0); - - /* "View.MemoryView":445 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 445, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 445, __pyx_L1_error) - - /* "View.MemoryView":446 - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<< - * src.ndim, dst.ndim, self.dtype_is_object) - * - */ - if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 446, __pyx_L1_error) - __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 446, __pyx_L1_error) - - /* "View.MemoryView":447 - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 447, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":445 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 445, __pyx_L1_error) - - /* "View.MemoryView":441 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":449 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # 
<<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) { - int __pyx_v_array[0x80]; - void *__pyx_v_tmp; - void *__pyx_v_item; - __Pyx_memviewslice *__pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_tmp_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - char const *__pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0); - - /* "View.MemoryView":451 - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - * cdef int array[128] - * cdef void *tmp = NULL # <<<<<<<<<<<<<< - * cdef void *item - * - */ - __pyx_v_tmp = NULL; - - /* "View.MemoryView":456 - * cdef __Pyx_memviewslice *dst_slice - * cdef __Pyx_memviewslice tmp_slice - * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<< - * - * if self.view.itemsize > sizeof(array): - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 456, __pyx_L1_error) - __pyx_v_dst_slice = __pyx_t_1; - - /* "View.MemoryView":458 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":459 - * - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<< - * if tmp == NULL: - * raise MemoryError - */ - __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize); - - /* "View.MemoryView":460 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":461 - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * item = tmp - * else: - */ - PyErr_NoMemory(); __PYX_ERR(1, 461, __pyx_L1_error) - - /* "View.MemoryView":460 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - } - - /* "View.MemoryView":462 - * if tmp == NULL: - * raise MemoryError - * item = tmp # <<<<<<<<<<<<<< - * else: - * item = <void *> array - */ - __pyx_v_item = __pyx_v_tmp; - - /* "View.MemoryView":458 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":464 - * item = tmp - * else: - * item = <void *> array # <<<<<<<<<<<<<< - * - * try: - */ - /*else*/ { - __pyx_v_item = ((void *)__pyx_v_array); - } - __pyx_L3:; - - /* "View.MemoryView":466 - * item = <void *> array - * - * try: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * (<PyObject **> item)[0] = <PyObject *> value - */ - 
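- /* Reviewer note (added commentary, not part of the Cython-generated output): at this point "item" holds scratch space for a single element: the 128-int stack array when view.itemsize fits, otherwise the PyMem_Malloc'ed "tmp". The try block below packs "value" into that space once (a direct PyObject* store for object buffers, or the assign_item_from_object() virtual call otherwise), and slice_assign_scalar() then broadcasts the packed item over every element of dst_slice; the finally clause frees "tmp" on all exit paths. */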
/*try:*/ { - - /* "View.MemoryView":467 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = <PyObject *> value - * else: - */ - __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":468 - * try: - * if self.dtype_is_object: - * (<PyObject **> item)[0] = <PyObject *> value # <<<<<<<<<<<<<< - * else: - * self.assign_item_from_object(<char *> item, value) - */ - (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value); - - /* "View.MemoryView":467 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * (<PyObject **> item)[0] = <PyObject *> value - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":470 - * (<PyObject **> item)[0] = <PyObject *> value - * else: - * self.assign_item_from_object(<char *> item, value) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 470, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":474 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":475 - * - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<< - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - * item, self.dtype_is_object) - */ - __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 475, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":474 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - } - - /* "View.MemoryView":476 - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<< - * item, self.dtype_is_object) - * finally: - */ - __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object); - } - - /* "View.MemoryView":479 - * item, self.dtype_is_object) - * finally: - * PyMem_Free(tmp) # <<<<<<<<<<<<<< - * - * cdef setitem_indexed(self, index, value): - */ - /*finally:*/ { - /*normal exit:*/{ - PyMem_Free(__pyx_v_tmp); - goto __pyx_L7; - } - __pyx_L6_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; - { - PyMem_Free(__pyx_v_tmp); - } - if 
(PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); - } - __Pyx_XGIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; - goto __pyx_L1_error; - } - __pyx_L7:; - } - - /* "View.MemoryView":449 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":481 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - char *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_indexed", 0); - - /* "View.MemoryView":482 - * - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<< - * self.assign_item_from_object(itemp, value) - * - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 482, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_1; - - /* "View.MemoryView":483 - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 483, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":481 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":485 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to 
convert the type""" - */ - -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_v_struct = NULL; - PyObject *__pyx_v_bytesitem = 0; - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - size_t __pyx_t_10; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":488 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef bytes bytesitem - * - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 488, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":491 - * cdef bytes bytesitem - * - * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<< - * try: - * result = struct.unpack(self.view.format, bytesitem) - */ - __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 491, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bytesitem = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "View.MemoryView":493 - * bytesitem = itemp[:self.view.itemsize] - * try: - * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<< - * except struct.error: - * raise ValueError("Unable to convert item to object") - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - { - __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_9); - if (__pyx_t_7) { - __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL; - } - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6); - __Pyx_INCREF(__pyx_v_bytesitem); - __Pyx_GIVEREF(__pyx_v_bytesitem); - PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem); - __pyx_t_6 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - } - - /* "View.MemoryView":497 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - /*else:*/ { - __pyx_t_10 = strlen(__pyx_v_self->view.format); - __pyx_t_11 = ((__pyx_t_10 == 1) != 0); - if (__pyx_t_11) { - - /* "View.MemoryView":498 - * else: - * if len(self.view.format) == 1: - * return result[0] # <<<<<<<<<<<<<< - * return result - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 498, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L6_except_return; - - /* "View.MemoryView":497 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - } - - /* "View.MemoryView":499 - * if len(self.view.format) == 1: - * return result[0] - * return result # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L6_except_return; - } - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "View.MemoryView":494 - * try: - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: # <<<<<<<<<<<<<< - * raise ValueError("Unable to convert item to object") - * else: - */ - __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9); - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 494, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9); - __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0; - if (__pyx_t_8) { - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 494, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_9); - 
__Pyx_GOTREF(__pyx_t_5); - __Pyx_GOTREF(__pyx_t_1); - - /* "View.MemoryView":495 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_Raise(__pyx_t_6, 0, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 495, __pyx_L5_except_error) - } - goto __pyx_L5_except_error; - __pyx_L5_except_error:; - - /* "View.MemoryView":492 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L0; - } - - /* "View.MemoryView":485 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesitem); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":501 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_v_struct = NULL; - char __pyx_v_c; - PyObject *__pyx_v_bytesvalue = 0; - Py_ssize_t __pyx_v_i; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - char *__pyx_t_11; - char *__pyx_t_12; - char *__pyx_t_13; - char *__pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":504 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef char c - * cdef bytes bytesvalue - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 504, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":509 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * 
bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_value); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "View.MemoryView":510 - * - * if isinstance(value, tuple): - * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<< - * else: - * bytesvalue = struct.pack(self.view.format, value) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 510, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 510, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":509 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":512 - * bytesvalue = struct.pack(self.view.format, *value) - * else: - * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<< - * - * for i, c in enumerate(bytesvalue): - */ - /*else*/ { - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - 
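  /* Hand annotation (not Cython-generated): each branch here (PyFunction fast
-    * call, PyCFunction fast call, generic tuple call below) performs the same
-    * Cython-level statement quoted from View.MemoryView:512,
-    *
-    *     bytesvalue = struct.pack(self.view.format, value)
-    *
-    * and the loop further down (View.MemoryView:514-515) then copies the packed
-    * bytes into the raw item:
-    *
-    *     for i, c in enumerate(bytesvalue):
-    *         itemp[i] = c
-    */ - 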
__Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - { - __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (__pyx_t_5) { - __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL; - } - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1); - __Pyx_INCREF(__pyx_v_value); - __Pyx_GIVEREF(__pyx_v_value); - PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value); - __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = 0; - if (unlikely(__pyx_v_bytesvalue == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable"); - __PYX_ERR(1, 514, __pyx_L1_error) - } - __Pyx_INCREF(__pyx_v_bytesvalue); - __pyx_t_10 = __pyx_v_bytesvalue; - __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10); - __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10)); - for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) { - __pyx_t_11 = __pyx_t_14; - __pyx_v_c = (__pyx_t_11[0]); - - /* "View.MemoryView":515 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_v_i = __pyx_t_9; - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = (__pyx_t_9 + 1); - - /* "View.MemoryView":515 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "View.MemoryView":501 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesvalue); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":518 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - -/* Python wrapper */ -static CYTHON_UNUSED int 
__pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - char *__pyx_t_5; - void *__pyx_t_6; - int __pyx_t_7; - Py_ssize_t __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":519 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_self->view.readonly != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":520 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 520, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 520, __pyx_L1_error) - - /* "View.MemoryView":519 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - } - - /* "View.MemoryView":522 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":523 - * - * if flags & PyBUF_ND: - * info.shape = self.view.shape # <<<<<<<<<<<<<< - * else: - * info.shape = NULL - */ - __pyx_t_4 = __pyx_v_self->view.shape; - __pyx_v_info->shape = __pyx_t_4; - - /* "View.MemoryView":522 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":525 - * info.shape = self.view.shape - * else: - * 
info.shape = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - /*else*/ { - __pyx_v_info->shape = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":527 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":528 - * - * if flags & PyBUF_STRIDES: - * info.strides = self.view.strides # <<<<<<<<<<<<<< - * else: - * info.strides = NULL - */ - __pyx_t_4 = __pyx_v_self->view.strides; - __pyx_v_info->strides = __pyx_t_4; - - /* "View.MemoryView":527 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - goto __pyx_L7; - } - - /* "View.MemoryView":530 - * info.strides = self.view.strides - * else: - * info.strides = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_INDIRECT: - */ - /*else*/ { - __pyx_v_info->strides = NULL; - } - __pyx_L7:; - - /* "View.MemoryView":532 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":533 - * - * if flags & PyBUF_INDIRECT: - * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<< - * else: - * info.suboffsets = NULL - */ - __pyx_t_4 = __pyx_v_self->view.suboffsets; - __pyx_v_info->suboffsets = __pyx_t_4; - - /* "View.MemoryView":532 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":535 - * info.suboffsets = self.view.suboffsets - * else: - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - /*else*/ { - __pyx_v_info->suboffsets = NULL; - } - __pyx_L8:; - - /* "View.MemoryView":537 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":538 - * - * if flags & PyBUF_FORMAT: - * info.format = self.view.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_5 = __pyx_v_self->view.format; - __pyx_v_info->format = __pyx_t_5; - - /* "View.MemoryView":537 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":540 - * info.format = self.view.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.buf = self.view.buf - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L9:; - - /* "View.MemoryView":542 - * info.format = NULL - * - * info.buf = self.view.buf # <<<<<<<<<<<<<< - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - */ - __pyx_t_6 = __pyx_v_self->view.buf; - __pyx_v_info->buf = __pyx_t_6; - - /* "View.MemoryView":543 - * - * info.buf = self.view.buf - * info.ndim = self.view.ndim # <<<<<<<<<<<<<< - * info.itemsize = self.view.itemsize - * info.len = self.view.len - */ - __pyx_t_7 = __pyx_v_self->view.ndim; - __pyx_v_info->ndim = __pyx_t_7; - - /* "View.MemoryView":544 - * info.buf = self.view.buf - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<< - * info.len = self.view.len - * info.readonly = self.view.readonly - */ - __pyx_t_8 = __pyx_v_self->view.itemsize; - __pyx_v_info->itemsize = __pyx_t_8; - - /* 
"View.MemoryView":545 - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - * info.len = self.view.len # <<<<<<<<<<<<<< - * info.readonly = self.view.readonly - * info.obj = self - */ - __pyx_t_8 = __pyx_v_self->view.len; - __pyx_v_info->len = __pyx_t_8; - - /* "View.MemoryView":546 - * info.itemsize = self.view.itemsize - * info.len = self.view.len - * info.readonly = self.view.readonly # <<<<<<<<<<<<<< - * info.obj = self - * - */ - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_v_info->readonly = __pyx_t_1; - - /* "View.MemoryView":547 - * info.len = self.view.len - * info.readonly = self.view.readonly - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":518 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":553 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":554 - * @property - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<< - * transpose_memslice(&result.from_slice) - * return result - */ - __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 554, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 554, __pyx_L1_error) - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1); - __pyx_t_1 
= 0; - - /* "View.MemoryView":555 - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 555, __pyx_L1_error) - - /* "View.MemoryView":556 - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - * return result # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":553 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":559 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":560 - * @property - * def base(self): - * return self.obj # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->obj); - __pyx_r = __pyx_v_self->obj; - goto __pyx_L0; - - /* "View.MemoryView":559 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":563 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_length; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t 
*__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":564 - * @property - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_length = (__pyx_t_2[0]); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 564, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":563 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":567 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_stride; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":568 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":570 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 570, __pyx_L1_error) - 
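  /* Hand annotation (not Cython-generated): view.strides is NULL when the
-    * exporter supplied no stride information (cf. __getbuffer__ above, which
-    * only fills info.strides when PyBUF_STRIDES is requested), so the property
-    * raises the ValueError built here; otherwise it falls through and returns
-    * tuple(self.view.strides[:self.view.ndim]).
-    */ - 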
__Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 570, __pyx_L1_error) - - /* "View.MemoryView":568 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - } - - /* "View.MemoryView":572 - * raise ValueError("Buffer view does not expose strides") - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_v_stride = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":567 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":575 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - Py_ssize_t *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":576 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":577 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in 
self.view.suboffsets[:self.view.ndim]]) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":576 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - } - - /* "View.MemoryView":579 - * return (-1,) * self.view.ndim - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim); - for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) { - __pyx_t_4 = __pyx_t_6; - __pyx_v_suboffset = (__pyx_t_4[0]); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":575 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":582 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":583 - * @property - * def ndim(self): - * return self.view.ndim # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 583, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; 
- __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":582 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":586 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":587 - * @property - * def itemsize(self): - * return self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 587, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":586 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":590 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":591 - * @property - * def nbytes(self): - * return self.size * self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - 
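  /* Hand annotation (not Cython-generated): "self.size" is the lazily cached
-    * element count computed by the size property below (View.MemoryView:594-603),
-    * so this getter evaluates, roughly:
-    *
-    *     size = 1
-    *     for length in shape[:ndim]:
-    *         size *= length          # cached in self._size on first access
-    *     nbytes = size * itemsize
-    */ - 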
__pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 591, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":590 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":594 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":595 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - __pyx_t_1 = (__pyx_v_self->_size == Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":596 - * def size(self): - * if self._size is None: - * result = 1 # <<<<<<<<<<<<<< - * - * for length in self.view.shape[:self.view.ndim]: - */ - __Pyx_INCREF(__pyx_int_1); - __pyx_v_result = __pyx_int_1; - - /* "View.MemoryView":598 - * result = 1 - * - * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<< - * result *= length - * - */ - __pyx_t_4 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 598, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6); - __pyx_t_6 = 0; - - /* "View.MemoryView":599 - * - * for length in self.view.shape[:self.view.ndim]: - * result *= length # <<<<<<<<<<<<<< - * - * self._size = result - */ - __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 599, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6); - __pyx_t_6 = 0; - } - - /* "View.MemoryView":601 - * result *= length - * - * self._size = result # <<<<<<<<<<<<<< - * - * return self._size - */ - __Pyx_INCREF(__pyx_v_result); - __Pyx_GIVEREF(__pyx_v_result); - __Pyx_GOTREF(__pyx_v_self->_size); - __Pyx_DECREF(__pyx_v_self->_size); - __pyx_v_self->_size = __pyx_v_result; - - /* "View.MemoryView":595 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - } - - /* "View.MemoryView":603 - * self._size = result - * - * return self._size # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->_size); - __pyx_r = __pyx_v_self->_size; - goto __pyx_L0; - - /* "View.MemoryView":594 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":605 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":606 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":607 - * def __len__(self): - * if self.view.ndim >= 1: - * return self.view.shape[0] # <<<<<<<<<<<<<< - * - * return 0 - */ - __pyx_r = (__pyx_v_self->view.shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":606 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - } - - /* "View.MemoryView":609 - * return self.view.shape[0] - * - * return 0 # <<<<<<<<<<<<<< - * - * def __repr__(self): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":605 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":611 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r at 0x%x>" % (self.base.__class__.__name__, - * id(self)) - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - 
__Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":612 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":613 - * def __repr__(self): - * return "" % (self.base.__class__.__name__, - * id(self)) # <<<<<<<<<<<<<< - * - * def __str__(self): - */ - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 613, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":612 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 612, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":611 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":615 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__,) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__str__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ 
- __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__str__", 0); - - /* "View.MemoryView":616 - * - * def __str__(self): - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 616, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":615 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "<MemoryView of %r object>" % (self.base.__class__.__name__,) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":619 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_c_contig", 0); - - /* "View.MemoryView":622 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - */ - __pyx_t_1 = 
__pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 622, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":623 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<< - * - * def is_f_contig(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 623, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":619 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":625 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_f_contig", 0); - - /* "View.MemoryView":628 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 628, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":629 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<< - * - * def copy(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 629, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":625 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - 
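  /* Hand annotation (not Cython-generated): is_c_contig and is_f_contig both
-    * materialise the underlying __Pyx_memviewslice with get_slice_from_memview
-    * and test it with slice_is_contig, using order 'C' (row-major, last index
-    * varies fastest) or 'F' (column-major, first index varies fastest).
-    */ - 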
- /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":631 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_mslice; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy", 0); - - /* "View.MemoryView":633 - * def copy(self): - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &mslice) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS)); - - /* "View.MemoryView":635 - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - * - * slice_copy(self, &mslice) # <<<<<<<<<<<<<< - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice)); - - /* "View.MemoryView":636 - * - * slice_copy(self, &mslice) - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_C_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 636, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":641 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<< - * - * def copy_fortran(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 641, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":631 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":643 - * return 
memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy_fortran", 0); - - /* "View.MemoryView":645 - * def copy_fortran(self): - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &src) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS)); - - /* "View.MemoryView":647 - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - * - * slice_copy(self, &src) # <<<<<<<<<<<<<< - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src)); - - /* "View.MemoryView":648 - * - * slice_copy(self, &src) - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_F_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 648, __pyx_L1_error) - __pyx_v_dst = __pyx_t_1; - - /* "View.MemoryView":653 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 653, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":643 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject 
*unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":657 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - -static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) { - struct __pyx_memoryview_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_cwrapper", 0); - - /* "View.MemoryView":658 - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<< - * result.typeinfo = typeinfo - * return result - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_o); - __Pyx_GIVEREF(__pyx_v_o); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 658, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":659 - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_v_result->typeinfo = __pyx_v_typeinfo; - - /* "View.MemoryView":660 - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_check') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":657 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint 
dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":663 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("memoryview_check", 0); - - /* "View.MemoryView":664 - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): - * return isinstance(o, memoryview) # <<<<<<<<<<<<<< - * - * cdef tuple _unellipsify(object index, int ndim): - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type); - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":663 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":666 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - -static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) { - PyObject *__pyx_v_tup = NULL; - PyObject *__pyx_v_result = NULL; - int __pyx_v_have_slices; - int __pyx_v_seen_ellipsis; - CYTHON_UNUSED PyObject *__pyx_v_idx = NULL; - PyObject *__pyx_v_item = NULL; - Py_ssize_t __pyx_v_nslices; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - int __pyx_t_9; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_unellipsify", 0); - - /* "View.MemoryView":671 - * full slices. - * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - __pyx_t_1 = PyTuple_Check(__pyx_v_index); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":672 - * """ - * if not isinstance(index, tuple): - * tup = (index,) # <<<<<<<<<<<<<< - * else: - * tup = index - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 672, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index); - __pyx_v_tup = __pyx_t_3; - __pyx_t_3 = 0; - - /* "View.MemoryView":671 - * full slices. 
- * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":674 - * tup = (index,) - * else: - * tup = index # <<<<<<<<<<<<<< - * - * result = [] - */ - /*else*/ { - __Pyx_INCREF(__pyx_v_index); - __pyx_v_tup = __pyx_v_index; - } - __pyx_L3:; - - /* "View.MemoryView":676 - * tup = index - * - * result = [] # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 676, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_result = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":677 - * - * result = [] - * have_slices = False # <<<<<<<<<<<<<< - * seen_ellipsis = False - * for idx, item in enumerate(tup): - */ - __pyx_v_have_slices = 0; - - /* "View.MemoryView":678 - * result = [] - * have_slices = False - * seen_ellipsis = False # <<<<<<<<<<<<<< - * for idx, item in enumerate(tup): - * if item is Ellipsis: - */ - __pyx_v_seen_ellipsis = 0; - - /* "View.MemoryView":679 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_3 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) { - __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 679, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_4))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 679, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } - } else { - __pyx_t_7 = __pyx_t_6(__pyx_t_4); - if (unlikely(!__pyx_t_7)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 679, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_7); - } - __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3); - __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 679, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_7; - __pyx_t_7 = 0; - - /* "View.MemoryView":680 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * 
result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis); - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":681 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":682 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 682, __pyx_L1_error) - __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":683 - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True # <<<<<<<<<<<<<< - * else: - * result.append(slice(None)) - */ - __pyx_v_seen_ellipsis = 1; - - /* "View.MemoryView":681 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - goto __pyx_L7; - } - - /* "View.MemoryView":685 - * seen_ellipsis = True - * else: - * result.append(slice(None)) # <<<<<<<<<<<<<< - * have_slices = True - * else: - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 685, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":686 - * else: - * result.append(slice(None)) - * have_slices = True # <<<<<<<<<<<<<< - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":680 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - goto __pyx_L6; - } - - /* "View.MemoryView":688 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - /*else*/ { - __pyx_t_2 = PySlice_Check(__pyx_v_item); - __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0); - __pyx_t_1 = __pyx_t_10; - __pyx_L9_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":689 - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<< - * - * have_slices = have_slices or isinstance(item, slice) - */ - __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject 
*)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 689, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 689, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_Raise(__pyx_t_11, 0, 0, 0); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __PYX_ERR(1, 689, __pyx_L1_error) - - /* "View.MemoryView":688 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - } - - /* "View.MemoryView":691 - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<< - * result.append(item) - * - */ - __pyx_t_10 = (__pyx_v_have_slices != 0); - if (!__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = PySlice_Check(__pyx_v_item); - __pyx_t_2 = (__pyx_t_10 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_have_slices = __pyx_t_1; - - /* "View.MemoryView":692 - * - * have_slices = have_slices or isinstance(item, slice) - * result.append(item) # <<<<<<<<<<<<<< - * - * nslices = ndim - len(result) - */ - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 692, __pyx_L1_error) - } - __pyx_L6:; - - /* "View.MemoryView":679 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":694 - * result.append(item) - * - * nslices = ndim - len(result) # <<<<<<<<<<<<<< - * if nslices: - * result.extend([slice(None)] * nslices) - */ - __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 694, __pyx_L1_error) - __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5); - - /* "View.MemoryView":695 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - __pyx_t_1 = (__pyx_v_nslices != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":696 - * nslices = ndim - len(result) - * if nslices: - * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<< - * - * return have_slices or nslices, tuple(result) - */ - __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 
0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 696, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 696, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":695 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - } - - /* "View.MemoryView":698 - * result.extend([slice(None)] * nslices) - * - * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<< - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - */ - __Pyx_XDECREF(__pyx_r); - if (!__pyx_v_have_slices) { - } else { - __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L14_bool_binop_done; - } - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_L14_bool_binop_done:; - __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4); - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_r = ((PyObject*)__pyx_t_11); - __pyx_t_11 = 0; - goto __pyx_L0; - - /* "View.MemoryView":666 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_tup); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_XDECREF(__pyx_v_item); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - -static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assert_direct_dimensions", 0); - - /* "View.MemoryView":701 - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") - */ - 
__pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim); - for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) { - __pyx_t_1 = __pyx_t_3; - __pyx_v_suboffset = (__pyx_t_1[0]); - - /* "View.MemoryView":702 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - __pyx_t_4 = ((__pyx_v_suboffset >= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_Raise(__pyx_t_5, 0, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(1, 703, __pyx_L1_error) - - /* "View.MemoryView":702 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - } - } - - /* "View.MemoryView":700 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":710 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) { - int __pyx_v_new_ndim; - int __pyx_v_suboffset_dim; - int __pyx_v_dim; - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - __Pyx_memviewslice *__pyx_v_p_src; - struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0; - __Pyx_memviewslice *__pyx_v_p_dst; - int *__pyx_v_p_suboffset_dim; - Py_ssize_t __pyx_v_start; - Py_ssize_t __pyx_v_stop; - Py_ssize_t __pyx_v_step; - int __pyx_v_have_start; - int __pyx_v_have_stop; - int __pyx_v_have_step; - PyObject *__pyx_v_index = NULL; - struct __pyx_memoryview_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - struct __pyx_memoryview_obj *__pyx_t_4; - char *__pyx_t_5; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - Py_ssize_t __pyx_t_10; - int __pyx_t_11; - Py_ssize_t __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memview_slice", 0); - - /* "View.MemoryView":711 - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): - * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<< - * cdef bint negative_step - * cdef __Pyx_memviewslice src, dst - */ - __pyx_v_new_ndim = 0; - __pyx_v_suboffset_dim = -1; - - /* 
"View.MemoryView":718 - * - * - * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<< - * - * cdef _memoryviewslice memviewsliceobj - */ - (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst)))); - - /* "View.MemoryView":722 - * cdef _memoryviewslice memviewsliceobj - * - * assert memview.view.ndim > 0 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) { - PyErr_SetNone(PyExc_AssertionError); - __PYX_ERR(1, 722, __pyx_L1_error) - } - } - #endif - - /* "View.MemoryView":724 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":725 - * - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview # <<<<<<<<<<<<<< - * p_src = &memviewsliceobj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 725, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":726 - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, &src) - */ - __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice); - - /* "View.MemoryView":724 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - goto __pyx_L3; - } - - /* "View.MemoryView":728 - * p_src = &memviewsliceobj.from_slice - * else: - * slice_copy(memview, &src) # <<<<<<<<<<<<<< - * p_src = &src - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src)); - - /* "View.MemoryView":729 - * else: - * slice_copy(memview, &src) - * p_src = &src # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_p_src = (&__pyx_v_src); - } - __pyx_L3:; - - /* "View.MemoryView":735 - * - * - * dst.memview = p_src.memview # <<<<<<<<<<<<<< - * dst.data = p_src.data - * - */ - __pyx_t_4 = __pyx_v_p_src->memview; - __pyx_v_dst.memview = __pyx_t_4; - - /* "View.MemoryView":736 - * - * dst.memview = p_src.memview - * dst.data = p_src.data # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_v_p_src->data; - __pyx_v_dst.data = __pyx_t_5; - - /* "View.MemoryView":741 - * - * - * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<< - * cdef int *p_suboffset_dim = &suboffset_dim - * cdef Py_ssize_t start, stop, step - */ - __pyx_v_p_dst = (&__pyx_v_dst); - - /* "View.MemoryView":742 - * - * cdef __Pyx_memviewslice *p_dst = &dst - * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<< - * cdef Py_ssize_t start, stop, step - * cdef bint have_start, have_stop, have_step - */ - __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim); - - /* "View.MemoryView":746 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - __pyx_t_6 = 0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - 
__pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - } else { - __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 746, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_8)) { - if (likely(PyList_CheckExact(__pyx_t_3))) { - if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } else { - if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 746, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 746, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } - } else { - __pyx_t_9 = __pyx_t_8(__pyx_t_3); - if (unlikely(!__pyx_t_9)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 746, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_9); - } - __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9); - __pyx_t_9 = 0; - __pyx_v_dim = __pyx_t_6; - __pyx_t_6 = (__pyx_t_6 + 1); - - /* "View.MemoryView":747 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":751 - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<< - * 0, 0, 0, # have_{start,stop,step} - * False) - */ - __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 751, __pyx_L1_error) - - /* "View.MemoryView":748 - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 748, __pyx_L1_error) - - /* "View.MemoryView":747 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - goto __pyx_L6; - } - - /* "View.MemoryView":754 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - __pyx_t_2 = (__pyx_v_index == Py_None); - __pyx_t_1 = 
(__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":755 - * False) - * elif index is None: - * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<< - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - */ - (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1; - - /* "View.MemoryView":756 - * elif index is None: - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<< - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 - */ - (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0; - - /* "View.MemoryView":757 - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<< - * new_ndim += 1 - * else: - */ - (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L; - - /* "View.MemoryView":758 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 # <<<<<<<<<<<<<< - * else: - * start = index.start or 0 - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - - /* "View.MemoryView":754 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - goto __pyx_L6; - } - - /* "View.MemoryView":760 - * new_ndim += 1 - * else: - * start = index.start or 0 # <<<<<<<<<<<<<< - * stop = index.stop or 0 - * step = index.step or 0 - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 760, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 760, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 760, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L7_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L7_bool_binop_done:; - __pyx_v_start = __pyx_t_10; - - /* "View.MemoryView":761 - * else: - * start = index.start or 0 - * stop = index.stop or 0 # <<<<<<<<<<<<<< - * step = index.step or 0 - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 761, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 761, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 761, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L9_bool_binop_done:; - __pyx_v_stop = __pyx_t_10; - - /* "View.MemoryView":762 - * start = index.start or 0 - * stop = index.stop or 0 - * step = index.step or 0 # <<<<<<<<<<<<<< - * - * have_start = index.start is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - 
__Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L11_bool_binop_done:; - __pyx_v_step = __pyx_t_10; - - /* "View.MemoryView":764 - * step = index.step or 0 - * - * have_start = index.start is not None # <<<<<<<<<<<<<< - * have_stop = index.stop is not None - * have_step = index.step is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_start = __pyx_t_1; - - /* "View.MemoryView":765 - * - * have_start = index.start is not None - * have_stop = index.stop is not None # <<<<<<<<<<<<<< - * have_step = index.step is not None - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 765, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_stop = __pyx_t_1; - - /* "View.MemoryView":766 - * have_start = index.start is not None - * have_stop = index.stop is not None - * have_step = index.step is not None # <<<<<<<<<<<<<< - * - * slice_memviewslice( - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_step = __pyx_t_1; - - /* "View.MemoryView":768 - * have_step = index.step is not None - * - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 768, __pyx_L1_error) - - /* "View.MemoryView":774 - * have_start, have_stop, have_step, - * True) - * new_ndim += 1 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - } - __pyx_L6:; - - /* "View.MemoryView":746 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":776 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":777 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":778 - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<< - * memviewsliceobj.to_dtype_func, - * memview.dtype_is_object) - */ - if (unlikely(!__pyx_v_memviewsliceobj)) 
{ __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 778, __pyx_L1_error) } - - /* "View.MemoryView":779 - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * else: - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 779, __pyx_L1_error) } - - /* "View.MemoryView":777 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 777, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 777, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":776 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - } - - /* "View.MemoryView":782 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - /*else*/ { - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":783 - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 782, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "View.MemoryView":782 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 782, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":710 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":807 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int 
__pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) { - Py_ssize_t __pyx_v_new_shape; - int __pyx_v_negative_step; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":827 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":829 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - __pyx_t_1 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":830 - * - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":829 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - } - - /* "View.MemoryView":831 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - __pyx_t_1 = (0 <= __pyx_v_start); - if (__pyx_t_1) { - __pyx_t_1 = (__pyx_v_start < __pyx_v_shape); - } - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":832 - * start += shape - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 832, __pyx_L1_error) - - /* "View.MemoryView":831 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":827 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":835 - * else: - * - * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<< - * - * if have_step and step == 0: - */ - /*else*/ { - __pyx_t_1 = ((__pyx_v_have_step != 0) != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step < 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L6_bool_binop_done:; - __pyx_v_negative_step = __pyx_t_2; - - /* "View.MemoryView":837 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) - * - */ - __pyx_t_1 = (__pyx_v_have_step != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step == 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L9_bool_binop_done:; - if (__pyx_t_2) { - - /* "View.MemoryView":838 - * - * if have_step and step == 0: - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 838, __pyx_L1_error) - - /* "View.MemoryView":837 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, 
"Step may not be zero (axis %d)", dim) - * - */ - } - - /* "View.MemoryView":841 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - __pyx_t_2 = (__pyx_v_have_start != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":842 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":843 - * if have_start: - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if start < 0: - * start = 0 - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":844 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":845 - * start += shape - * if start < 0: - * start = 0 # <<<<<<<<<<<<<< - * elif start >= shape: - * if negative_step: - */ - __pyx_v_start = 0; - - /* "View.MemoryView":844 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - } - - /* "View.MemoryView":842 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - goto __pyx_L12; - } - - /* "View.MemoryView":846 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":847 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":848 - * elif start >= shape: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = shape - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":847 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L14; - } - - /* "View.MemoryView":850 - * start = shape - 1 - * else: - * start = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - /*else*/ { - __pyx_v_start = __pyx_v_shape; - } - __pyx_L14:; - - /* "View.MemoryView":846 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - } - __pyx_L12:; - - /* "View.MemoryView":841 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - goto __pyx_L11; - } - - /* "View.MemoryView":852 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":853 - * else: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = 0 - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":852 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L15; - } - - /* "View.MemoryView":855 - * start = shape - 1 - * else: - * start = 0 # <<<<<<<<<<<<<< - * - * if have_stop: - */ - /*else*/ { - __pyx_v_start = 0; - } - __pyx_L15:; - } - __pyx_L11:; - - /* "View.MemoryView":857 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - __pyx_t_2 = (__pyx_v_have_stop != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":858 - * - * if have_stop: - * if stop < 0: # 
<<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":859 - * if have_stop: - * if stop < 0: - * stop += shape # <<<<<<<<<<<<<< - * if stop < 0: - * stop = 0 - */ - __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape); - - /* "View.MemoryView":860 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":861 - * stop += shape - * if stop < 0: - * stop = 0 # <<<<<<<<<<<<<< - * elif stop > shape: - * stop = shape - */ - __pyx_v_stop = 0; - - /* "View.MemoryView":860 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - } - - /* "View.MemoryView":858 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - goto __pyx_L17; - } - - /* "View.MemoryView":862 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":863 - * stop = 0 - * elif stop > shape: - * stop = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - __pyx_v_stop = __pyx_v_shape; - - /* "View.MemoryView":862 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - } - __pyx_L17:; - - /* "View.MemoryView":857 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - goto __pyx_L16; - } - - /* "View.MemoryView":865 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":866 - * else: - * if negative_step: - * stop = -1 # <<<<<<<<<<<<<< - * else: - * stop = shape - */ - __pyx_v_stop = -1L; - - /* "View.MemoryView":865 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - goto __pyx_L19; - } - - /* "View.MemoryView":868 - * stop = -1 - * else: - * stop = shape # <<<<<<<<<<<<<< - * - * if not have_step: - */ - /*else*/ { - __pyx_v_stop = __pyx_v_shape; - } - __pyx_L19:; - } - __pyx_L16:; - - /* "View.MemoryView":870 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":871 - * - * if not have_step: - * step = 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_step = 1; - - /* "View.MemoryView":870 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - } - - /* "View.MemoryView":875 - * - * with cython.cdivision(True): - * new_shape = (stop - start) // step # <<<<<<<<<<<<<< - * - * if (stop - start) - step * new_shape: - */ - __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step); - - /* "View.MemoryView":877 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":878 - * - * if (stop - start) - step * new_shape: - * new_shape += 1 # <<<<<<<<<<<<<< - * - * if new_shape < 0: - */ - __pyx_v_new_shape = (__pyx_v_new_shape + 1); - - /* "View.MemoryView":877 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * 
new_shape += 1 - * - */ - } - - /* "View.MemoryView":880 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":881 - * - * if new_shape < 0: - * new_shape = 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_new_shape = 0; - - /* "View.MemoryView":880 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - } - - /* "View.MemoryView":884 - * - * - * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<< - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset - */ - (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step); - - /* "View.MemoryView":885 - * - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<< - * dst.suboffsets[new_ndim] = suboffset - * - */ - (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape; - - /* "View.MemoryView":886 - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset; - } - __pyx_L3:; - - /* "View.MemoryView":889 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":890 - * - * if suboffset_dim[0] < 0: - * dst.data += start * stride # <<<<<<<<<<<<<< - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride - */ - __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride)); - - /* "View.MemoryView":889 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - goto __pyx_L23; - } - - /* "View.MemoryView":892 - * dst.data += start * stride - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<< - * - * if suboffset >= 0: - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_suboffset_dim[0]); - (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride)); - } - __pyx_L23:; - - /* "View.MemoryView":894 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":895 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":896 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":897 - * if not is_slice: - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset # <<<<<<<<<<<<<< - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - */ - __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":896 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - goto __pyx_L26; - } - - /* "View.MemoryView":899 - * dst.data = ( dst.data)[0] + suboffset - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<< - * 
"must be indexed and not sliced", dim) - * else: - */ - /*else*/ { - - /* "View.MemoryView":900 - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<< - * else: - * suboffset_dim[0] = new_ndim - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 899, __pyx_L1_error) - } - __pyx_L26:; - - /* "View.MemoryView":895 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - goto __pyx_L25; - } - - /* "View.MemoryView":902 - * "must be indexed and not sliced", dim) - * else: - * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<< - * - * return 0 - */ - /*else*/ { - (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim; - } - __pyx_L25:; - - /* "View.MemoryView":894 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - } - - /* "View.MemoryView":904 - * suboffset_dim[0] = new_ndim - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":807 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":910 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - -static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_suboffset; - Py_ssize_t __pyx_v_itemsize; - char *__pyx_v_resultp; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("pybuffer_index", 0); - - /* "View.MemoryView":912 - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<< - * cdef Py_ssize_t itemsize = view.itemsize - * cdef char *resultp - */ - __pyx_v_suboffset = -1L; - - /* "View.MemoryView":913 - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<< - * cdef char *resultp - * - */ - __pyx_t_1 = __pyx_v_view->itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":916 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":917 - * - * if view.ndim == 
0: - * shape = view.len / itemsize # <<<<<<<<<<<<<< - * stride = itemsize - * else: - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 917, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 917, __pyx_L1_error) - } - __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize); - - /* "View.MemoryView":918 - * if view.ndim == 0: - * shape = view.len / itemsize - * stride = itemsize # <<<<<<<<<<<<<< - * else: - * shape = view.shape[dim] - */ - __pyx_v_stride = __pyx_v_itemsize; - - /* "View.MemoryView":916 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - goto __pyx_L3; - } - - /* "View.MemoryView":920 - * stride = itemsize - * else: - * shape = view.shape[dim] # <<<<<<<<<<<<<< - * stride = view.strides[dim] - * if view.suboffsets != NULL: - */ - /*else*/ { - __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]); - - /* "View.MemoryView":921 - * else: - * shape = view.shape[dim] - * stride = view.strides[dim] # <<<<<<<<<<<<<< - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] - */ - __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]); - - /* "View.MemoryView":922 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":923 - * stride = view.strides[dim] - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<< - * - * if index < 0: - */ - __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]); - - /* "View.MemoryView":922 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - } - } - __pyx_L3:; - - /* "View.MemoryView":925 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":926 - * - * if index < 0: - * index += view.shape[dim] # <<<<<<<<<<<<<< - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - */ - __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim])); - - /* "View.MemoryView":927 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":928 - * index += view.shape[dim] - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * if index >= shape: - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = 
__Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 928, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 928, __pyx_L1_error) - - /* "View.MemoryView":927 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":925 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - } - - /* "View.MemoryView":930 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":931 - * - * if index >= shape: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * resultp = bufp + index * stride - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 931, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 931, __pyx_L1_error) - - /* "View.MemoryView":930 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":933 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * resultp = bufp + index * stride # <<<<<<<<<<<<<< - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset - */ - __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride)); - - /* "View.MemoryView":934 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":935 - * resultp = bufp + index * stride - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset # <<<<<<<<<<<<<< - * - * return resultp - */ - __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":934 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - } - - /* "View.MemoryView":937 - * resultp = ( resultp)[0] + suboffset - * - * return resultp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_resultp; - goto __pyx_L0; - - /* "View.MemoryView":910 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - 
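- /*
- * pybuffer_index applies the usual Python indexing contract before the
- * pointer arithmetic: a negative index wraps once by the axis extent,
- * and anything still outside [0, shape) raises the IndexError above. A
- * rough sketch of that check, with a hypothetical helper standing in
- * for the generated code:
- *
- *     Py_ssize_t normalize_index(Py_ssize_t index, Py_ssize_t shape) {
- *         if (index < 0)
- *             index += shape;
- *         if (index < 0 || index >= shape)
- *             return -1;
- *         return index;
- *     }
- */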
__Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":943 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - -static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) { - int __pyx_v_ndim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_r; - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - long __pyx_t_3; - long __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":944 - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: - * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<< - * - * cdef Py_ssize_t *shape = memslice.shape - */ - __pyx_t_1 = __pyx_v_memslice->memview->view.ndim; - __pyx_v_ndim = __pyx_t_1; - - /* "View.MemoryView":946 - * cdef int ndim = memslice.memview.view.ndim - * - * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<< - * cdef Py_ssize_t *strides = memslice.strides - * - */ - __pyx_t_2 = __pyx_v_memslice->shape; - __pyx_v_shape = __pyx_t_2; - - /* "View.MemoryView":947 - * - * cdef Py_ssize_t *shape = memslice.shape - * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_v_memslice->strides; - __pyx_v_strides = __pyx_t_2; - - /* "View.MemoryView":951 - * - * cdef int i, j - * for i in range(ndim / 2): # <<<<<<<<<<<<<< - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - */ - __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2); - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":952 - * cdef int i, j - * for i in range(ndim / 2): - * j = ndim - 1 - i # <<<<<<<<<<<<<< - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] - */ - __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i); - - /* "View.MemoryView":953 - * for i in range(ndim / 2): - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<< - * shape[i], shape[j] = shape[j], shape[i] - * - */ - __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]); - __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]); - (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5; - (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6; - - /* "View.MemoryView":954 - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<< - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - */ - __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]); - __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]); - (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6; - (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5; - - /* "View.MemoryView":956 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0); - if (!__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_8 = 
(((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "View.MemoryView":957 - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<< - * - * return 1 - */ - __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 957, __pyx_L1_error) - - /* "View.MemoryView":956 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - } - } - - /* "View.MemoryView":959 - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - * return 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 1; - goto __pyx_L0; - - /* "View.MemoryView":943 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = 0; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":976 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - -/* Python wrapper */ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":977 - * - * def __dealloc__(self): - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1); - - /* "View.MemoryView":976 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":979 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
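- /*
- * convert_item_to_object below is the read path for a typed slice: when
- * a dtype-specific to_object_func pointer has been installed it boxes
- * the raw item bytes itself, otherwise the generic memoryview
- * implementation is used. The dispatch amounts to this sketch, with
- * hypothetical names in place of the generated ones:
- *
- *     PyObject *convert_item(slice_object *self, char *itemp) {
- *         if (self->to_object_func != NULL)
- *             return self->to_object_func(itemp);
- *         return base_convert_item(self, itemp);
- *     }
- */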
__Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":980 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":981 - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) # <<<<<<<<<<<<<< - * else: - * return memoryview.convert_item_to_object(self, itemp) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 981, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":980 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - } - - /* "View.MemoryView":983 - * return self.to_object_func(itemp) - * else: - * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":979 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":985 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":986 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":987 - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<< - * else: - * memoryview.assign_item_from_object(self, itemp, value) - */ - __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 987, __pyx_L1_error) - - /* "View.MemoryView":986 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if 
self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":989 - * self.to_dtype_func(itemp, value) - * else: - * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<< - * - * @property - */ - /*else*/ { - __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 989, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":985 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":992 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":993 - * @property - * def base(self): - * return self.from_object # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->from_object); - __pyx_r = __pyx_v_self->from_object; - goto __pyx_L0; - - /* "View.MemoryView":992 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code 
*/ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_TypeInfo *__pyx_t_4; - Py_buffer __pyx_t_5; - Py_ssize_t *__pyx_t_6; - Py_ssize_t *__pyx_t_7; - Py_ssize_t *__pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_fromslice", 0); - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1008 - * - * if memviewslice.memview == Py_None: - * return None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "View.MemoryView":1007 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - } - - /* "View.MemoryView":1013 - * - * - * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<< - * - * result.from_slice = memviewslice - */ - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1013, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1015 - * result = _memoryviewslice(None, 0, dtype_is_object) - * - * result.from_slice = memviewslice # <<<<<<<<<<<<<< - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - */ - __pyx_v_result->from_slice = __pyx_v_memviewslice; - - /* "View.MemoryView":1016 - * - * result.from_slice = memviewslice - * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<< - * - * result.from_object = ( memviewslice.memview).base - */ - __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1); - - /* "View.MemoryView":1018 - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - * result.from_object = ( memviewslice.memview).base # <<<<<<<<<<<<<< - * result.typeinfo = 
memviewslice.memview.typeinfo - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1018, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_v_result->from_object); - __Pyx_DECREF(__pyx_v_result->from_object); - __pyx_v_result->from_object = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":1019 - * - * result.from_object = ( memviewslice.memview).base - * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<< - * - * result.view = memviewslice.memview.view - */ - __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo; - __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4; - - /* "View.MemoryView":1021 - * result.typeinfo = memviewslice.memview.typeinfo - * - * result.view = memviewslice.memview.view # <<<<<<<<<<<<<< - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - */ - __pyx_t_5 = __pyx_v_memviewslice.memview->view; - __pyx_v_result->__pyx_base.view = __pyx_t_5; - - /* "View.MemoryView":1022 - * - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data # <<<<<<<<<<<<<< - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - */ - __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data); - - /* "View.MemoryView":1023 - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data - * result.view.ndim = ndim # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim; - - /* "View.MemoryView":1024 - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None; - - /* "View.MemoryView":1025 - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1028 - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<< - * else: - * result.flags = PyBUF_RECORDS_RO - */ - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS; - - /* "View.MemoryView":1027 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1030 - * result.flags = PyBUF_RECORDS - * else: - * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<< - * - * result.view.shape = result.from_slice.shape - */ - /*else*/ { - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO; - } - __pyx_L4:; - - /* "View.MemoryView":1032 - * result.flags = PyBUF_RECORDS_RO - * - * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<< - * result.view.strides = result.from_slice.strides - * - */ - __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape); - - /* "View.MemoryView":1033 - * - * result.view.shape = result.from_slice.shape - * result.view.strides = 
result.from_slice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides); - - /* "View.MemoryView":1036 - * - * - * result.view.suboffsets = NULL # <<<<<<<<<<<<<< - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - */ - __pyx_v_result->__pyx_base.view.suboffsets = NULL; - - /* "View.MemoryView":1037 - * - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - */ - __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_v_suboffset = (__pyx_t_6[0]); - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - __pyx_t_1 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1039 - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets); - - /* "View.MemoryView":1040 - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - * break # <<<<<<<<<<<<<< - * - * result.view.len = result.view.itemsize - */ - goto __pyx_L6_break; - - /* "View.MemoryView":1038 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - } - } - __pyx_L6_break:; - - /* "View.MemoryView":1042 - * break - * - * result.view.len = result.view.itemsize # <<<<<<<<<<<<<< - * for length in result.view.shape[:ndim]: - * result.view.len *= length - */ - __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - - /* "View.MemoryView":1043 - * - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<< - * result.view.len *= length - * - */ - __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1043, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1044 - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: - * result.view.len *= length # <<<<<<<<<<<<<< - * - * result.to_object_func = to_object_func - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1044, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
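- /*
- * The surrounding loop rebuilds view.len as itemsize times the product
- * of the extents, i.e. the total number of bytes the new memoryview
- * spans. Stripped of the detour through boxed Python integers, the
- * computation is just (sketch, hypothetical names):
- *
- *     Py_ssize_t len = view->itemsize;
- *     for (int d = 0; d < ndim; d++)
- *         len *= view->shape[d];
- *     view->len = len;
- */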
__pyx_v_result->__pyx_base.view.len = __pyx_t_9; - } - - /* "View.MemoryView":1046 - * result.view.len *= length - * - * result.to_object_func = to_object_func # <<<<<<<<<<<<<< - * result.to_dtype_func = to_dtype_func - * - */ - __pyx_v_result->to_object_func = __pyx_v_to_object_func; - - /* "View.MemoryView":1047 - * - * result.to_object_func = to_object_func - * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func; - - /* "View.MemoryView":1049 - * result.to_dtype_func = to_dtype_func - * - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":999 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) { - struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0; - __Pyx_memviewslice *__pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_slice_from_memview", 0); - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1056 - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): - * obj = memview # <<<<<<<<<<<<<< - * return &obj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1056, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":1057 - * if isinstance(memview, _memoryviewslice): - * obj = memview - * return &obj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, mslice) - */ - __pyx_r = (&__pyx_v_obj->from_slice); - goto __pyx_L0; - - /* "View.MemoryView":1055 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - } - - /* "View.MemoryView":1059 - * return 
&obj.from_slice - * else: - * slice_copy(memview, mslice) # <<<<<<<<<<<<<< - * return mslice - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice); - - /* "View.MemoryView":1060 - * else: - * slice_copy(memview, mslice) - * return mslice # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_slice_copy') - */ - __pyx_r = __pyx_v_mslice; - goto __pyx_L0; - } - - /* "View.MemoryView":1052 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_obj); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) { - int __pyx_v_dim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - Py_ssize_t *__pyx_v_suboffsets; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - __Pyx_RefNannySetupContext("slice_copy", 0); - - /* "View.MemoryView":1067 - * cdef (Py_ssize_t*) shape, strides, suboffsets - * - * shape = memview.view.shape # <<<<<<<<<<<<<< - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets - */ - __pyx_t_1 = __pyx_v_memview->view.shape; - __pyx_v_shape = __pyx_t_1; - - /* "View.MemoryView":1068 - * - * shape = memview.view.shape - * strides = memview.view.strides # <<<<<<<<<<<<<< - * suboffsets = memview.view.suboffsets - * - */ - __pyx_t_1 = __pyx_v_memview->view.strides; - __pyx_v_strides = __pyx_t_1; - - /* "View.MemoryView":1069 - * shape = memview.view.shape - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<< - * - * dst.memview = <__pyx_memoryview *> memview - */ - __pyx_t_1 = __pyx_v_memview->view.suboffsets; - __pyx_v_suboffsets = __pyx_t_1; - - /* "View.MemoryView":1071 - * suboffsets = memview.view.suboffsets - * - * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<< - * dst.data = memview.view.buf - * - */ - __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview); - - /* "View.MemoryView":1072 - * - * dst.memview = <__pyx_memoryview *> memview - * dst.data = memview.view.buf # <<<<<<<<<<<<<< - * - * for dim in range(memview.view.ndim): - */ - __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf); - - /* "View.MemoryView":1074 - * dst.data = memview.view.buf - * - * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<< - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - */ - __pyx_t_2 = __pyx_v_memview->view.ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_dim = __pyx_t_4; - - /* "View.MemoryView":1075 - * - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] # <<<<<<<<<<<<<< - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - */ - (__pyx_v_dst->shape[__pyx_v_dim]) = 
(__pyx_v_shape[__pyx_v_dim]); - - /* "View.MemoryView":1076 - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<< - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - * - */ - (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]); - - /* "View.MemoryView":1077 - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object') - */ - if ((__pyx_v_suboffsets != 0)) { - __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]); - } else { - __pyx_t_5 = -1L; - } - (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5; - } - - /* "View.MemoryView":1063 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) { - __Pyx_memviewslice __pyx_v_memviewslice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy", 0); - - /* "View.MemoryView":1083 - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<< - * return memoryview_copy_from_slice(memview, &memviewslice) - * - */ - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice)); - - /* "View.MemoryView":1084 - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) - * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object_from_slice') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1084, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1080 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. 
- */ - -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) { - PyObject *(*__pyx_v_to_object_func)(char *); - int (*__pyx_v_to_dtype_func)(char *, PyObject *); - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *(*__pyx_t_3)(char *); - int (*__pyx_t_4)(char *, PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0); - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1095 - * - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<< - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - */ - __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func; - __pyx_v_to_object_func = __pyx_t_3; - - /* "View.MemoryView":1096 - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<< - * else: - * to_object_func = NULL - */ - __pyx_t_4 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func; - __pyx_v_to_dtype_func = __pyx_t_4; - - /* "View.MemoryView":1094 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1098 - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - * to_object_func = NULL # <<<<<<<<<<<<<< - * to_dtype_func = NULL - * - */ - /*else*/ { - __pyx_v_to_object_func = NULL; - - /* "View.MemoryView":1099 - * else: - * to_object_func = NULL - * to_dtype_func = NULL # <<<<<<<<<<<<<< - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - */ - __pyx_v_to_dtype_func = NULL; - } - __pyx_L3:; - - /* "View.MemoryView":1101 - * to_dtype_func = NULL - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<< - * to_object_func, to_dtype_func, - * memview.dtype_is_object) - */ - __Pyx_XDECREF(__pyx_r); - - /* "View.MemoryView":1103 - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - * to_object_func, to_dtype_func, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 1101, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1087 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview 
object and slice. - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - -static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) { - Py_ssize_t __pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - __pyx_t_1 = ((__pyx_v_arg < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1111 - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: - * return -arg # <<<<<<<<<<<<<< - * else: - * return arg - */ - __pyx_r = (-__pyx_v_arg); - goto __pyx_L0; - - /* "View.MemoryView":1110 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - } - - /* "View.MemoryView":1113 - * return -arg - * else: - * return arg # <<<<<<<<<<<<<< - * - * @cname('__pyx_get_best_slice_order') - */ - /*else*/ { - __pyx_r = __pyx_v_arg; - goto __pyx_L0; - } - - /* "View.MemoryView":1109 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1116 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. 
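- *
- * The heuristic compares the innermost significant C-order stride with
- * the outermost significant F-order stride and returns 'C' or 'F' so
- * the copy routines can choose a traversal order. Roughly, as a sketch
- * assuming a freestanding helper rather than the generated function:
- *
- *     char best_order(const Py_ssize_t *shape, const Py_ssize_t *strides,
- *                     int ndim) {
- *         Py_ssize_t c = 0, f = 0;
- *         int i;
- *         for (i = ndim - 1; i >= 0; i--)
- *             if (shape[i] > 1) { c = strides[i]; break; }
- *         for (i = 0; i < ndim; i++)
- *             if (shape[i] > 1) { f = strides[i]; break; }
- *         c = c < 0 ? -c : c;
- *         f = f < 0 ? -f : f;
- *         return c <= f ? 'C' : 'F';
- *     }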
- */ - -static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) { - int __pyx_v_i; - Py_ssize_t __pyx_v_c_stride; - Py_ssize_t __pyx_v_f_stride; - char __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1121 - * """ - * cdef int i - * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<< - * cdef Py_ssize_t f_stride = 0 - * - */ - __pyx_v_c_stride = 0; - - /* "View.MemoryView":1122 - * cdef int i - * cdef Py_ssize_t c_stride = 0 - * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_f_stride = 0; - - /* "View.MemoryView":1124 - * cdef Py_ssize_t f_stride = 0 - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1125 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1126 - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1127 - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - goto __pyx_L4_break; - - /* "View.MemoryView":1125 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L4_break:; - - /* "View.MemoryView":1129 - * break - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - */ - __pyx_t_1 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1130 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1131 - * for i in range(ndim): - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1132 - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - */ - goto __pyx_L7_break; - - /* "View.MemoryView":1130 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L7_break:; - - /* "View.MemoryView":1134 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - __pyx_t_2 = ((abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1135 - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - * return 'C' # <<<<<<<<<<<<<< - * else: - * return 'F' - */ - __pyx_r = 'C'; - goto __pyx_L0; - - /* "View.MemoryView":1134 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - } - - /* "View.MemoryView":1137 - * return 'C' - * else: - 
* return 'F' # <<<<<<<<<<<<<< - * - * @cython.cdivision(True) - */ - /*else*/ { - __pyx_r = 'F'; - goto __pyx_L0; - } - - /* "View.MemoryView":1116 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1140 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - -static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent; - Py_ssize_t __pyx_v_dst_extent; - Py_ssize_t __pyx_v_src_stride; - Py_ssize_t __pyx_v_dst_stride; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - - /* "View.MemoryView":1147 - * - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - */ - __pyx_v_src_extent = (__pyx_v_src_shape[0]); - - /* "View.MemoryView":1148 - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] - */ - __pyx_v_dst_extent = (__pyx_v_dst_shape[0]); - - /* "View.MemoryView":1149 - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - */ - __pyx_v_src_stride = (__pyx_v_src_strides[0]); - - /* "View.MemoryView":1150 - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_dst_stride = (__pyx_v_dst_strides[0]); - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - __pyx_t_2 = ((__pyx_v_src_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = ((__pyx_v_dst_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - - /* "View.MemoryView":1154 - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - */ - __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize); - if (__pyx_t_2) { - __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride)); - } - __pyx_t_3 = (__pyx_t_2 != 
0); - __pyx_t_1 = __pyx_t_3; - __pyx_L5_bool_binop_done:; - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - if (__pyx_t_1) { - - /* "View.MemoryView":1155 - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent))); - - /* "View.MemoryView":1153 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1157 - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1158 - * else: - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<< - * src_data += src_stride - * dst_data += dst_stride - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize)); - - /* "View.MemoryView":1159 - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * else: - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1160 - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L4:; - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1162 - * dst_data += dst_stride - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * _copy_strided_to_strided(src_data, src_strides + 1, - * dst_data, dst_strides + 1, - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1163 - * else: - * for i in range(dst_extent): - * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<< - * dst_data, dst_strides + 1, - * src_shape + 1, dst_shape + 1, - */ - _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize); - - /* "View.MemoryView":1167 - * src_shape + 1, dst_shape + 1, - * ndim - 1, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1168 - * ndim - 1, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L3:; - - 
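- /* Copy strategy recap for _copy_strided_to_strided: in the 1-D case a
- * single memcpy of itemsize * dst_extent bytes is used when both strides
- * are positive and equal to itemsize (i.e. the run is contiguous);
- * otherwise each element gets its own memcpy and src_data/dst_data
- * advance by their respective strides. For ndim > 1 the function
- * recurses on the trailing dimensions (shape + 1, strides + 1, ndim - 1),
- * stepping the outer pointers once per iteration of the first axis.
- */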
/* "View.MemoryView":1140 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - - /* function exit code */ -} - -/* "View.MemoryView":1170 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - -static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - - /* "View.MemoryView":1173 - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<< - * src.shape, dst.shape, ndim, itemsize) - * - */ - _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1170 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1177 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_size; - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - - /* "View.MemoryView":1179 - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<< - * - * for shape in src.shape[:ndim]: - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_size = __pyx_t_1; - - /* "View.MemoryView":1181 - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - * - * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<< - * size *= shape - * - */ - __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim); - for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_shape = (__pyx_t_2[0]); - - /* "View.MemoryView":1182 - * - * for shape in src.shape[:ndim]: - * size *= shape # <<<<<<<<<<<<<< - * - * return size - */ - __pyx_v_size = (__pyx_v_size * __pyx_v_shape); - } - - /* "View.MemoryView":1184 - * size *= shape - * - * return size # <<<<<<<<<<<<<< - * - * @cname('__pyx_fill_contig_strides_array') - */ - __pyx_r = __pyx_v_size; - goto __pyx_L0; - - /* "View.MemoryView":1177 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1187 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * 
Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) { - int __pyx_v_idx; - Py_ssize_t __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1196 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - __pyx_t_1 = ((__pyx_v_order == 'F') != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1197 - * - * if order == 'F': - * for idx in range(ndim): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - __pyx_t_2 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_idx = __pyx_t_4; - - /* "View.MemoryView":1198 - * if order == 'F': - * for idx in range(ndim): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * else: - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1199 - * for idx in range(ndim): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * else: - * for idx in range(ndim - 1, -1, -1): - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - - /* "View.MemoryView":1196 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1201 - * stride *= shape[idx] - * else: - * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - /*else*/ { - for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) { - __pyx_v_idx = __pyx_t_2; - - /* "View.MemoryView":1202 - * else: - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1203 - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * - * return stride - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - } - __pyx_L3:; - - /* "View.MemoryView":1205 - * stride *= shape[idx] - * - * return stride # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_data_to_temp') - */ - __pyx_r = __pyx_v_stride; - goto __pyx_L0; - - /* "View.MemoryView":1187 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1208 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) { - int __pyx_v_i; - void *__pyx_v_result; - size_t __pyx_v_itemsize; - size_t __pyx_v_size; - void *__pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - struct __pyx_memoryview_obj *__pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1219 - * cdef void *result - * - * cdef 
size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef size_t size = slice_get_size(src, ndim) - * - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1220 - * - * cdef size_t itemsize = src.memview.view.itemsize - * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<< - * - * result = malloc(size) - */ - __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim); - - /* "View.MemoryView":1222 - * cdef size_t size = slice_get_size(src, ndim) - * - * result = malloc(size) # <<<<<<<<<<<<<< - * if not result: - * _err(MemoryError, NULL) - */ - __pyx_v_result = malloc(__pyx_v_size); - - /* "View.MemoryView":1223 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - __pyx_t_2 = ((!(__pyx_v_result != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1224 - * result = malloc(size) - * if not result: - * _err(MemoryError, NULL) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err(__pyx_builtin_MemoryError, NULL); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1224, __pyx_L1_error) - - /* "View.MemoryView":1223 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - } - - /* "View.MemoryView":1227 - * - * - * tmpslice.data = result # <<<<<<<<<<<<<< - * tmpslice.memview = src.memview - * for i in range(ndim): - */ - __pyx_v_tmpslice->data = ((char *)__pyx_v_result); - - /* "View.MemoryView":1228 - * - * tmpslice.data = result - * tmpslice.memview = src.memview # <<<<<<<<<<<<<< - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - */ - __pyx_t_4 = __pyx_v_src->memview; - __pyx_v_tmpslice->memview = __pyx_t_4; - - /* "View.MemoryView":1229 - * tmpslice.data = result - * tmpslice.memview = src.memview - * for i in range(ndim): # <<<<<<<<<<<<<< - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1230 - * tmpslice.memview = src.memview - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<< - * tmpslice.suboffsets[i] = -1 - * - */ - (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]); - - /* "View.MemoryView":1231 - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, - */ - (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1233 - * tmpslice.suboffsets[i] = -1 - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, # <<<<<<<<<<<<<< - * ndim, order) - * - */ - (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order)); - - /* "View.MemoryView":1237 - * - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1238 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - __pyx_t_2 = (((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1239 - * for i 
in range(ndim): - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<< - * - * if slice_is_contig(src[0], order, ndim): - */ - (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1238 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - } - } - - /* "View.MemoryView":1241 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1242 - * - * if slice_is_contig(src[0], order, ndim): - * memcpy(result, src.data, size) # <<<<<<<<<<<<<< - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - */ - (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size)); - - /* "View.MemoryView":1241 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":1244 - * memcpy(result, src.data, size) - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<< - * - * return result - */ - /*else*/ { - copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize); - } - __pyx_L9:; - - /* "View.MemoryView":1246 - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":1208 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = NULL; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1251 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - -static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_extents", 0); - - /* "View.MemoryView":1254 - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - * (i, extent1, extent2)) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_dim') - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_extent1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_extent2); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1254, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":1253 - * cdef int _err_extents(int i, Py_ssize_t extent1, - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % # <<<<<<<<<<<<<< - * (i, extent1, extent2)) - * - */ - __pyx_t_3 = __Pyx_PyString_Format(__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 1253, __pyx_L1_error) - - /* "View.MemoryView":1251 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1257 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - -static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx_v_msg, int __pyx_v_dim) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_dim", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1258 - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: - * raise error(msg.decode('ascii') % dim) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err') - */ - __pyx_t_2 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyUnicode_Format(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_v_error); - __pyx_t_3 
= __pyx_v_error; __pyx_t_2 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - } - } - __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1258, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 1258, __pyx_L1_error) - - /* "View.MemoryView":1257 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1261 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - -static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_msg) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1262 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - __pyx_t_1 = ((__pyx_v_msg != NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":1263 - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: - * raise error(msg.decode('ascii')) # <<<<<<<<<<<<<< - * else: - * raise error - */ - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_error); - __pyx_t_4 = __pyx_v_error; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - } - } - __pyx_t_2 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1263, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 1263, __pyx_L1_error) - - /* "View.MemoryView":1262 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - } - - /* "View.MemoryView":1265 - * raise error(msg.decode('ascii')) - * else: - * raise error # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_contents') - */ - /*else*/ { - __Pyx_Raise(__pyx_v_error, 0, 0, 0); - __PYX_ERR(1, 1265, __pyx_L1_error) - } - - /* "View.MemoryView":1261 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1268 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) { - void *__pyx_v_tmpdata; - size_t __pyx_v_itemsize; - int __pyx_v_i; - char __pyx_v_order; - int __pyx_v_broadcasting; - int __pyx_v_direct_copy; - __Pyx_memviewslice __pyx_v_tmp; - int __pyx_v_ndim; - int __pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - void *__pyx_t_7; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1276 - * Check for overlapping memory and verify the shapes. 
- * """ - * cdef void *tmpdata = NULL # <<<<<<<<<<<<<< - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - */ - __pyx_v_tmpdata = NULL; - - /* "View.MemoryView":1277 - * """ - * cdef void *tmpdata = NULL - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - */ - __pyx_t_1 = __pyx_v_src.memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1279 - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<< - * cdef bint broadcasting = False - * cdef bint direct_copy = False - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim); - - /* "View.MemoryView":1280 - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False # <<<<<<<<<<<<<< - * cdef bint direct_copy = False - * cdef __Pyx_memviewslice tmp - */ - __pyx_v_broadcasting = 0; - - /* "View.MemoryView":1281 - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False - * cdef bint direct_copy = False # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice tmp - * - */ - __pyx_v_direct_copy = 0; - - /* "View.MemoryView":1284 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - __pyx_t_2 = ((__pyx_v_src_ndim < __pyx_v_dst_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1285 - * - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim); - - /* "View.MemoryView":1284 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1286 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - __pyx_t_2 = ((__pyx_v_dst_ndim < __pyx_v_src_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1287 - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<< - * - * cdef int ndim = max(src_ndim, dst_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim); - - /* "View.MemoryView":1286 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - } - __pyx_L3:; - - /* "View.MemoryView":1289 - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - __pyx_t_3 = __pyx_v_dst_ndim; - __pyx_t_4 = __pyx_v_src_ndim; - if (((__pyx_t_3 > __pyx_t_4) != 0)) { - __pyx_t_5 = __pyx_t_3; - } else { - __pyx_t_5 = __pyx_t_4; - } - __pyx_v_ndim = __pyx_t_5; - - /* "View.MemoryView":1291 - * cdef int ndim = max(src_ndim, dst_ndim) - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - */ - __pyx_t_5 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_5; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - 
__pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1292 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1293 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1294 - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - * broadcasting = True # <<<<<<<<<<<<<< - * src.strides[i] = 0 - * else: - */ - __pyx_v_broadcasting = 1; - - /* "View.MemoryView":1295 - * if src.shape[i] == 1: - * broadcasting = True - * src.strides[i] = 0 # <<<<<<<<<<<<<< - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) - */ - (__pyx_v_src.strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1293 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - goto __pyx_L7; - } - - /* "View.MemoryView":1297 - * src.strides[i] = 0 - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<< - * - * if src.suboffsets[i] >= 0: - */ - /*else*/ { - __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1297, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":1292 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - } - - /* "View.MemoryView":1299 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - __pyx_t_2 = (((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1300 - * - * if src.suboffsets[i] >= 0: - * _err_dim(ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<< - * - * if slices_overlap(&src, &dst, ndim, itemsize): - */ - __pyx_t_6 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Dimension %d is not direct"), __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1300, __pyx_L1_error) - - /* "View.MemoryView":1299 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - } - } - - /* "View.MemoryView":1302 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - __pyx_t_2 = (__pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1304 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - __pyx_t_2 = ((!(__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim) != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1305 - * - * if not slice_is_contig(src, order, ndim): - * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<< - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - */ - __pyx_v_order = 
__pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim); - - /* "View.MemoryView":1304 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - } - - /* "View.MemoryView":1307 - * order = get_best_order(&dst, ndim) - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # <<<<<<<<<<<<<< - * src = tmp - * - */ - __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1307, __pyx_L1_error) - __pyx_v_tmpdata = __pyx_t_7; - - /* "View.MemoryView":1308 - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - * src = tmp # <<<<<<<<<<<<<< - * - * if not broadcasting: - */ - __pyx_v_src = __pyx_v_tmp; - - /* "View.MemoryView":1302 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - } - - /* "View.MemoryView":1310 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = ((!(__pyx_v_broadcasting != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1313 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1314 - * - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<< - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim); - - /* "View.MemoryView":1313 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - goto __pyx_L12; - } - - /* "View.MemoryView":1315 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1316 - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<< - * - * if direct_copy: - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim); - - /* "View.MemoryView":1315 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - } - __pyx_L12:; - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_2 = (__pyx_v_direct_copy != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1320 - * if direct_copy: - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - 
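- /* Direct-copy fast path: src and dst share the same contiguity order
- * ('C' or 'F'), so object refcounting has just been switched off for
- * dst, the raw bytes are copied below with one memcpy of
- * slice_get_size(&src, ndim) bytes, refcounting is restored, and the
- * temporary buffer (if any) is freed before returning 0.
- */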
/* "View.MemoryView":1321 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - */ - (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim))); - - /* "View.MemoryView":1322 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * free(tmpdata) - * return 0 - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1323 - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1324 - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * if order == 'F' == get_best_order(&dst, ndim): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - } - - /* "View.MemoryView":1310 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1326 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (__pyx_v_order == 'F'); - if (__pyx_t_2) { - __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim)); - } - __pyx_t_8 = (__pyx_t_2 != 0); - if (__pyx_t_8) { - - /* "View.MemoryView":1329 - * - * - * transpose_memslice(&src) # <<<<<<<<<<<<<< - * transpose_memslice(&dst) - * - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1329, __pyx_L1_error) - - /* "View.MemoryView":1330 - * - * transpose_memslice(&src) - * transpose_memslice(&dst) # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1330, __pyx_L1_error) - - /* "View.MemoryView":1326 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1332 - * transpose_memslice(&dst) - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1333 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * - */ - copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1334 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * free(tmpdata) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1336 - * refcount_copying(&dst, dtype_is_object, ndim, 
True) - * - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1337 - * - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_broadcast_leading') - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1268 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1340 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) { - int __pyx_v_i; - int __pyx_v_offset; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - - /* "View.MemoryView":1344 - * int ndim_other) nogil: - * cdef int i - * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim); - - /* "View.MemoryView":1346 - * cdef int offset = ndim_other - ndim - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1347 - * - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<< - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - */ - (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]); - - /* "View.MemoryView":1348 - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<< - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - */ - (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1349 - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<< - * - * for i in range(offset): - */ - (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]); - } - - /* "View.MemoryView":1351 - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - * for i in range(offset): # <<<<<<<<<<<<<< - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - */ - __pyx_t_1 = __pyx_v_offset; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1352 - * - * for i in range(offset): - * mslice.shape[i] = 1 # <<<<<<<<<<<<<< - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 - */ - (__pyx_v_mslice->shape[__pyx_v_i]) = 1; - - /* "View.MemoryView":1353 - * for i in range(offset): - * mslice.shape[i] = 1 
- * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<< - * mslice.suboffsets[i] = -1 - * - */ - (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]); - - /* "View.MemoryView":1354 - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1340 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1362 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) { - int __pyx_t_1; - - /* "View.MemoryView":1366 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - __pyx_t_1 = (__pyx_v_dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1367 - * - * if dtype_is_object: - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, # <<<<<<<<<<<<<< - * dst.strides, ndim, inc) - * - */ - __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1366 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - } - - /* "View.MemoryView":1362 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - - /* function exit code */ -} - -/* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - __Pyx_RefNannyDeclarations - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0); - - /* "View.MemoryView":1374 - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1371 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif -} - -/* "View.MemoryView":1377 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * 
Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - -static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0); - - /* "View.MemoryView":1381 - * cdef Py_ssize_t i - * - * for i in range(shape[0]): # <<<<<<<<<<<<<< - * if ndim == 1: - * if inc: - */ - __pyx_t_1 = (__pyx_v_shape[0]); - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1382 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - __pyx_t_4 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1383 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - __pyx_t_4 = (__pyx_v_inc != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1384 - * if ndim == 1: - * if inc: - * Py_INCREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * Py_DECREF(( data)[0]) - */ - Py_INCREF((((PyObject **)__pyx_v_data)[0])); - - /* "View.MemoryView":1383 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":1386 - * Py_INCREF(( data)[0]) - * else: - * Py_DECREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - */ - /*else*/ { - Py_DECREF((((PyObject **)__pyx_v_data)[0])); - } - __pyx_L6:; - - /* "View.MemoryView":1382 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - goto __pyx_L5; - } - - /* "View.MemoryView":1388 - * Py_DECREF(( data)[0]) - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, inc) - * - */ - /*else*/ { - - /* "View.MemoryView":1389 - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - * ndim - 1, inc) # <<<<<<<<<<<<<< - * - * data += strides[0] - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc); - } - __pyx_L5:; - - /* "View.MemoryView":1391 - * ndim - 1, inc) - * - * data += strides[0] # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0])); - } - - /* "View.MemoryView":1377 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1397 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) { - - /* "View.MemoryView":1400 - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * _slice_assign_scalar(dst.data, dst.shape, 
dst.strides, ndim, - * itemsize, item) - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1401 - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, # <<<<<<<<<<<<<< - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1403 - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1397 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1407 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - -static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_extent; - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - - /* "View.MemoryView":1411 - * size_t itemsize, void *item) nogil: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t extent = shape[0] - * - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1412 - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] - * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_extent = (__pyx_v_shape[0]); - - /* "View.MemoryView":1414 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1415 - * - * if ndim == 1: - * for i in range(extent): # <<<<<<<<<<<<<< - * memcpy(data, item, itemsize) - * data += stride - */ - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1416 - * if ndim == 1: - * for i in range(extent): - * memcpy(data, item, itemsize) # <<<<<<<<<<<<<< - * data += stride - * else: - */ - (void)(memcpy(__pyx_v_data, __pyx_v_item, __pyx_v_itemsize)); - - /* "View.MemoryView":1417 - * for i in range(extent): - * memcpy(data, item, itemsize) - * data += stride # <<<<<<<<<<<<<< - * else: - * for i in range(extent): - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1414 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1419 - * data += stride - * else: - * for i in range(extent): # <<<<<<<<<<<<<< - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, 
itemsize, item) - */ - /*else*/ { - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1420 - * else: - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, itemsize, item) - * data += stride - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1422 - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, itemsize, item) - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1407 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - - /* function exit code */ -} - -/* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, METH_VARARGS|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v___pyx_type = 0; - long __pyx_v___pyx_checksum; - PyObject *__pyx_v___pyx_state = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_type)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_checksum)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error) - } - } else if 
(PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v___pyx_type = values[0]; - __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_v___pyx_state = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_v___pyx_PickleError = 0; - PyObject *__pyx_v___pyx_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - */ - __pyx_t_1 = ((__pyx_v___pyx_checksum != 0xb068931) != 0); - if (__pyx_t_1) { - - /* "(tree fragment)":5 - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: - * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<< - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - */ - __pyx_t_2 = PyList_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_n_s_PickleError); - __Pyx_GIVEREF(__pyx_n_s_PickleError); - PyList_SET_ITEM(__pyx_t_2, 0, __pyx_n_s_PickleError); - __pyx_t_3 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_2, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_ImportFrom(__pyx_t_3, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_t_2); - __pyx_v___pyx_PickleError = __pyx_t_2; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "(tree fragment)":6 - * if __pyx_checksum != 0xb068931: - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) # <<<<<<<<<<<<<< - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - */ - __pyx_t_2 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_INCREF(__pyx_v___pyx_PickleError); - __pyx_t_2 = __pyx_v___pyx_PickleError; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_3 = (__pyx_t_5) ? __Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_5, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 6, __pyx_L1_error) - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum != 0xb068931: # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - */ - } - - /* "(tree fragment)":7 - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<< - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), __pyx_n_s_new); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) { - __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2); - if (likely(__pyx_t_4)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_2, function); - } - } - __pyx_t_3 = (__pyx_t_4) ? 
__Pyx_PyObject_Call2Args(__pyx_t_2, __pyx_t_4, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_v___pyx_type); - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v___pyx_result = __pyx_t_3; - __pyx_t_3 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - __pyx_t_1 = (__pyx_v___pyx_state != Py_None); - __pyx_t_6 = (__pyx_t_1 != 0); - if (__pyx_t_6) { - - /* "(tree fragment)":9 - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) # <<<<<<<<<<<<<< - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error) - __pyx_t_3 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 9, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError("Incompatible checksums (%s vs 0xb068931 = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - } - - /* "(tree fragment)":10 - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result # <<<<<<<<<<<<<< - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v___pyx_result); - __pyx_r = __pyx_v___pyx_result; - goto __pyx_L0; - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v___pyx_PickleError); - __Pyx_XDECREF(__pyx_v___pyx_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyObject 
*__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0); - - /* "(tree fragment)":12 - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<< - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 12, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_v___pyx_result->name); - __Pyx_DECREF(__pyx_v___pyx_result->name); - __pyx_v___pyx_result->name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 13, __pyx_L1_error) - } - __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_4 = ((__pyx_t_3 > 1) != 0); - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_5 = (__pyx_t_4 != 0); - __pyx_t_2 = __pyx_t_5; - __pyx_L4_bool_binop_done:; - if (__pyx_t_2) { - - /* "(tree fragment)":14 - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<< - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 14, __pyx_L1_error) - } - __pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - } - } - __pyx_t_1 = (__pyx_t_8) ? 
__Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - } - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static struct __pyx_vtabstruct_array __pyx_vtable_array; - -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_array_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_array_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_array; - p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None); - p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None); - if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_array(PyObject *o) { - struct __pyx_array_obj *p = (struct __pyx_array_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_array___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->mode); - Py_CLEAR(p->_format); - (*Py_TYPE(o)->tp_free)(o); -} -static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_array___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) { - PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n); - if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - v = __pyx_array___getattr__(o, n); - } - return v; -} - 
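/*
 * The generated tables that follow wire per-type slot structs
 * (PySequenceMethods, PyMappingMethods, PyBufferProcs) and a PyGetSetDef
 * array into one static PyTypeObject for the `array` class. A minimal
 * hand-written sketch of the same CPython slot-wiring pattern is shown
 * below; all names (DemoObject, Demo_length, DemoType) are illustrative
 * only and are not part of the generated module.
 */
#include <Python.h>

typedef struct {
    PyObject_HEAD
    Py_ssize_t length;              /* example payload, like array's len */
} DemoObject;

/* lenfunc slot: backs both len(obj) and sq_length, as __pyx_array___len__
   does for the generated type. */
static Py_ssize_t Demo_length(PyObject *self) {
    return ((DemoObject *)self)->length;
}

static PySequenceMethods Demo_as_sequence = {
    Demo_length,                    /* sq_length */
};

static PyTypeObject DemoType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    .tp_name = "demo.Demo",
    .tp_basicsize = sizeof(DemoObject),
    .tp_flags = Py_TPFLAGS_DEFAULT,
    .tp_as_sequence = &Demo_as_sequence,
    .tp_new = PyType_GenericNew,
};

/* As in the generated __Pyx_modinit_type_init_code() further below,
   PyType_Ready(&DemoType) must succeed (return 0) in module init before
   the type is ever instantiated. */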
-static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o); -} - -static PyMethodDef __pyx_methods_array[] = { - {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_array_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_array_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_array[] = { - {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_array = { - __pyx_array___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_array, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_array = { - __pyx_array___len__, /*mp_length*/ - __pyx_array___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_array = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.array", /*tp_name*/ - sizeof(struct __pyx_array_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_array, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_array, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_array, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - __pyx_tp_getattro_array, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_array, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_array, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_array, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_array, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; - -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - struct __pyx_MemviewEnum_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o 
= (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_MemviewEnum_obj *)o); - p->name = Py_None; Py_INCREF(Py_None); - return o; -} - -static void __pyx_tp_dealloc_Enum(PyObject *o) { - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->name); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - if (p->name) { - e = (*v)(p->name, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_Enum(PyObject *o) { - PyObject* tmp; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - tmp = ((PyObject*)p->name); - p->name = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyMethodDef __pyx_methods_Enum[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_MemviewEnum = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.Enum", /*tp_name*/ - sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_Enum, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_MemviewEnum___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_Enum, /*tp_traverse*/ - __pyx_tp_clear_Enum, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_Enum, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - __pyx_MemviewEnum___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_Enum, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; -static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview; - -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryview_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct 
__pyx_memoryview_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_memoryview; - p->obj = Py_None; Py_INCREF(Py_None); - p->_size = Py_None; Py_INCREF(Py_None); - p->_array_interface = Py_None; Py_INCREF(Py_None); - p->view.obj = NULL; - if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_memoryview(PyObject *o) { - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryview___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->obj); - Py_CLEAR(p->_size); - Py_CLEAR(p->_array_interface); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - if (p->obj) { - e = (*v)(p->obj, a); if (e) return e; - } - if (p->_size) { - e = (*v)(p->_size, a); if (e) return e; - } - if (p->_array_interface) { - e = (*v)(p->_array_interface, a); if (e) return e; - } - if (p->view.obj) { - e = (*v)(p->view.obj, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_memoryview(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - tmp = ((PyObject*)p->obj); - p->obj = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_size); - p->_size = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_array_interface); - p->_array_interface = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - Py_CLEAR(p->view.obj); - return 0; -} -static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_memoryview___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o); -} - -static 
PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o); -} - -static PyMethodDef __pyx_methods_memoryview[] = { - {"is_c_contig", (PyCFunction)__pyx_memoryview_is_c_contig, METH_NOARGS, 0}, - {"is_f_contig", (PyCFunction)__pyx_memoryview_is_f_contig, METH_NOARGS, 0}, - {"copy", (PyCFunction)__pyx_memoryview_copy, METH_NOARGS, 0}, - {"copy_fortran", (PyCFunction)__pyx_memoryview_copy_fortran, METH_NOARGS, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_memoryview[] = { - {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0}, - {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0}, - {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0}, - {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0}, - {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0}, - {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0}, - {(char *)"itemsize", __pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0}, - {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0}, - {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_memoryview = { - __pyx_memoryview___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_memoryview, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_memoryview = { - __pyx_memoryview___len__, /*mp_length*/ - __pyx_memoryview___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_memoryview = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core.memoryview", /*tp_name*/ - sizeof(struct __pyx_memoryview_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_memoryview, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_memoryview___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - __pyx_memoryview___str__, 
/*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_memoryview, /*tp_traverse*/ - __pyx_tp_clear_memoryview, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_memoryview, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_memoryview, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_memoryview, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; -static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice; - -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryviewslice_obj *p; - PyObject *o = __pyx_tp_new_memoryview(t, a, k); - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryviewslice_obj *)o); - p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice; - p->from_object = Py_None; Py_INCREF(Py_None); - p->from_slice.memview = NULL; - return o; -} - -static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) { - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryviewslice___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->from_object); - PyObject_GC_Track(o); - __pyx_tp_dealloc_memoryview(o); -} - -static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e; - if (p->from_object) { - e = (*v)(p->from_object, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear__memoryviewslice(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - __pyx_tp_clear_memoryview(o); - tmp = ((PyObject*)p->from_object); - p->from_object = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - __PYX_XDEC_MEMVIEW(&p->from_slice, 1); - return 0; -} - -static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(o); -} - -static PyMethodDef __pyx_methods__memoryviewslice[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef 
__pyx_getsets__memoryviewslice[] = { - {(char *)"base", __pyx_getprop___pyx_memoryviewslice_base, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_memoryviewslice = { - PyVarObject_HEAD_INIT(0, 0) - "monotonic_align.core._memoryviewslice", /*tp_name*/ - sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___repr__, /*tp_repr*/ - #else - 0, /*tp_repr*/ - #endif - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___str__, /*tp_str*/ - #else - 0, /*tp_str*/ - #endif - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - "Internal class for passing memoryview slices to Python", /*tp_doc*/ - __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/ - __pyx_tp_clear__memoryviewslice, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods__memoryviewslice, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets__memoryviewslice, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new__memoryviewslice, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif -}; - -static PyMethodDef __pyx_methods[] = { - {"maximum_path_c", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15monotonic_align_4core_1maximum_path_c, METH_VARARGS|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_core(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_core}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "core", - 0, /* m_doc */ - #if CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_ASCII, __pyx_k_ASCII, 
sizeof(__pyx_k_ASCII), 0, 0, 1, 1}, - {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0}, - {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_index_with_type_s, __pyx_k_Cannot_index_with_type_s, sizeof(__pyx_k_Cannot_index_with_type_s), 0, 0, 1, 0}, - {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1}, - {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, __pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0}, - {&__pyx_kp_s_Incompatible_checksums_s_vs_0xb0, __pyx_k_Incompatible_checksums_s_vs_0xb0, sizeof(__pyx_k_Incompatible_checksums_s_vs_0xb0), 0, 0, 1, 0}, - {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, - {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_k_Invalid_shape_in_axis_d_d, sizeof(__pyx_k_Invalid_shape_in_axis_d_d), 0, 0, 1, 0}, - {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1}, - {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0}, - {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0}, - {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1}, - {&__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 0, 1, 0}, - {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1}, - {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1}, - {&__pyx_n_s_base, __pyx_k_base, sizeof(__pyx_k_base), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1}, - {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 
1, 1}, - {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1}, - {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1}, - {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1}, - {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1}, - {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1}, - {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, - {&__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 0, 1, 0}, - {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1}, - {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_memview, __pyx_k_memview, sizeof(__pyx_k_memview), 0, 0, 1, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1}, - {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1}, - {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_getbuffer, __pyx_k_pyx_getbuffer, sizeof(__pyx_k_pyx_getbuffer), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, - {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, - {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 
1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1}, - {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1}, - {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0}, - {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1}, - {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1}, - {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0}, - {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0}, - {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 133, __pyx_L1_error) - __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 148, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 151, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) - __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 404, __pyx_L1_error) - __pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 613, __pyx_L1_error) - __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 832, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* "View.MemoryView":133 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_Empty_shape_tuple_for_cython_arr); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 133, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "View.MemoryView":136 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_itemsize_0_for_cython_array); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 136, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - - /* "View.MemoryView":148 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_shape_and_str); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 148, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "View.MemoryView":176 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_array_data); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 176, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - - /* "View.MemoryView":192 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_Can_only_create_a_buffer_that_is); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 192, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__7); - __Pyx_GIVEREF(__pyx_tuple__7); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - /* "View.MemoryView":418 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_Cannot_assign_to_read_only_memor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 418, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__9); - __Pyx_GIVEREF(__pyx_tuple__9); - - /* "View.MemoryView":495 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_s_Unable_to_convert_item_to_object); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - - /* "View.MemoryView":520 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_Cannot_create_writable_memory_vi); if 
(unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 520, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "View.MemoryView":570 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_Buffer_view_does_not_expose_stri); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 570, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - - /* "View.MemoryView":577 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __pyx_tuple__13 = PyTuple_New(1); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 577, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - __Pyx_INCREF(__pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_int_neg_1); - PyTuple_SET_ITEM(__pyx_tuple__13, 0, __pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_tuple__13); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "View.MemoryView":682 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_slice__16 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(1, 682, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - - /* "View.MemoryView":703 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_Indirect_dimensions_not_supporte); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 703, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__18); - __Pyx_GIVEREF(__pyx_tuple__18); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # 
<<<<<<<<<<<<<< - */ - __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - - /* "View.MemoryView":286 - * return self.name - * - * cdef generic = Enum("<strided and direct or indirect>") # <<<<<<<<<<<<<< - * cdef strided = Enum("<strided and direct>") # default - * cdef indirect = Enum("<strided and indirect>") - */ - __pyx_tuple__20 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(1, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__20); - __Pyx_GIVEREF(__pyx_tuple__20); - - /* "View.MemoryView":287 - * - * cdef generic = Enum("<strided and direct or indirect>") - * cdef strided = Enum("<strided and direct>") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("<strided and indirect>") - * - */ - __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - - /* "View.MemoryView":288 - * cdef generic = Enum("<strided and direct or indirect>") - * cdef strided = Enum("<strided and direct>") # default - * cdef indirect = Enum("<strided and indirect>") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__22); - __Pyx_GIVEREF(__pyx_tuple__22); - - /* "View.MemoryView":291 - * - * - * cdef contiguous = Enum("<contiguous and direct>") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("<contiguous and indirect>") - * - */ - __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - - /* "View.MemoryView":292 - * - * cdef contiguous = Enum("<contiguous and direct>") - * cdef indirect_contiguous = Enum("<contiguous and indirect>") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__24); - __Pyx_GIVEREF(__pyx_tuple__24); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_tuple__25 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__25); - __Pyx_GIVEREF(__pyx_tuple__25); - __pyx_codeobj__26 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__25, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__26)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* InitThreads.init */ - #ifdef WITH_THREAD -PyEval_InitThreads(); -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = 
PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - generic = Py_None; Py_INCREF(Py_None); - strided = Py_None; Py_INCREF(Py_None); - indirect = Py_None; Py_INCREF(Py_None); - contiguous = Py_None; Py_INCREF(Py_None); - indirect_contiguous = Py_None; Py_INCREF(Py_None); - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __pyx_vtabptr_array = &__pyx_vtable_array; - __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview; - if (PyType_Ready(&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_array.tp_print = 0; - #endif - if (__Pyx_SetVtable(__pyx_type___pyx_array.tp_dict, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 105, __pyx_L1_error) - __pyx_array_type = &__pyx_type___pyx_array; - if (PyType_Ready(&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_MemviewEnum.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_MemviewEnum.tp_dictoffset && __pyx_type___pyx_MemviewEnum.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_MemviewEnum.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 279, __pyx_L1_error) - __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum; - __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview; - __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer; - __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice; - __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment; - 
__pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar; - __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed; - __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object; - __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object; - if (PyType_Ready(&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryview.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryview.tp_dictoffset && __pyx_type___pyx_memoryview.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryview.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryview.tp_dict, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 330, __pyx_L1_error) - __pyx_memoryview_type = &__pyx_type___pyx_memoryview; - __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice; - __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview; - __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object; - __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object; - __pyx_type___pyx_memoryviewslice.tp_base = __pyx_memoryview_type; - if (PyType_Ready(&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryviewslice.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryviewslice.tp_dictoffset && __pyx_type___pyx_memoryviewslice.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryviewslice.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryviewslice.tp_dict, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 965, __pyx_L1_error) - __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice; - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef 
CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initcore(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_core(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? -1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - static PyThread_type_lock __pyx_t_2[8]; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - #ifdef WITH_THREAD /* Python build with threading support? */ - PyEval_InitThreads(); - #endif - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_monotonic_align__core) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "monotonic_align.core")) { - if (unlikely(PyDict_SetItemString(modules, "monotonic_align.core", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely(__Pyx_modinit_type_init_code() < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - __pyx_k_ = (-1e9); - - /* "monotonic_align/core.pyx":1 - * cimport cython # <<<<<<<<<<<<<< - * from cython.parallel import prange - * - */ - __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":209 - * info.obj = self - * - * __pyx_getbuffer = capsule(<void *> &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * def __dealloc__(array self): - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_array_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 209, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_array_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 209, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":286 - * return self.name - * - * cdef generic = Enum("<strided and direct or indirect>") # <<<<<<<<<<<<<< - * cdef strided = Enum("<strided and direct>") # default - * cdef indirect = Enum("<strided and indirect>") - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__20, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 286, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(generic); - __Pyx_DECREF_SET(generic, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":287 - * - * cdef generic = Enum("<strided and direct or indirect>") - * cdef strided = Enum("<strided and direct>") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("<strided and indirect>") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__21, NULL); if 
(unlikely(!__pyx_t_1)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(strided); - __Pyx_DECREF_SET(strided, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":288 - * cdef generic = Enum("<strided and direct or indirect>") - * cdef strided = Enum("<strided and direct>") # default - * cdef indirect = Enum("<strided and indirect>") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__22, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect); - __Pyx_DECREF_SET(indirect, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":291 - * - * - * cdef contiguous = Enum("<contiguous and direct>") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("<contiguous and indirect>") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 291, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(contiguous); - __Pyx_DECREF_SET(contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":292 - * - * cdef contiguous = Enum("<contiguous and direct>") - * cdef indirect_contiguous = Enum("<contiguous and indirect>") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect_contiguous); - __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":316 - * - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<< - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ - * PyThread_allocate_lock(), - */ - __pyx_memoryview_thread_locks_used = 0; - - /* "View.MemoryView":317 - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<< - * PyThread_allocate_lock(), - * PyThread_allocate_lock(), - */ - __pyx_t_2[0] = PyThread_allocate_lock(); - __pyx_t_2[1] = PyThread_allocate_lock(); - __pyx_t_2[2] = PyThread_allocate_lock(); - __pyx_t_2[3] = PyThread_allocate_lock(); - __pyx_t_2[4] = PyThread_allocate_lock(); - __pyx_t_2[5] = PyThread_allocate_lock(); - __pyx_t_2[6] = PyThread_allocate_lock(); - __pyx_t_2[7] = PyThread_allocate_lock(); - memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_2, sizeof(__pyx_memoryview_thread_locks[0]) * (8)); - - /* "View.MemoryView":549 - * info.obj = self - * - * __pyx_getbuffer = capsule(<void *> &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 549, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryview_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 549, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryview_type); - - /* "View.MemoryView":995 - * return self.from_object - * - * __pyx_getbuffer = capsule(<void *> &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 995, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 995, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state(<Enum> __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init monotonic_align.core"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* MemviewSliceInit */ -static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } else { - Py_ssize_t stride = 
buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; -fail: - memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} -#ifndef Py_NO_RETURN -#define Py_NO_RETURN -#endif -static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN { - va_list vargs; - char msg[200]; -#ifdef HAVE_STDARG_PROTOTYPES - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - Py_FatalError(msg); -} -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno) -{ - int first_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) - return; - if (unlikely(__pyx_get_slice_count(memview) < 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - first_time = __pyx_add_acquisition_count(memview) == 0; - if (unlikely(first_time)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } -} -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice, - int have_gil, int lineno) { - int last_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - memslice->memview = NULL; - return; - } - if (unlikely(__pyx_get_slice_count(memview) <= 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - last_time = __pyx_sub_acquisition_count(memview) == 1; - memslice->data = NULL; - if (unlikely(last_time)) { - if (have_gil) { - Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - memslice->memview = NULL; - } -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" 
CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* None */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* ArgTypeTest */ -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) -{ - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - else if (exact) { - #if PY_MAJOR_VERSION == 2 - if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; - #endif - } - else { - if (likely(__Pyx_TypeCheck(obj, type))) return 1; - } - PyErr_Format(PyExc_TypeError, - "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", - name, type->tp_name, Py_TYPE(obj)->tp_name); - return 0; -} - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = func->ob_type->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, - CYTHON_UNUSED PyObject *cause) { - __Pyx_PyThreadState_declare - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; 
- } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_COMPILING_IN_PYPY - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#else - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * 
__Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); - Py_DECREF(function); -done: - return result; -} - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); 
- return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (PyCFunction_GET_FLAGS(func) & METH_FASTCALL) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* None */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (!j) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; - if (likely(m && m->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { - Py_ssize_t l = m->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return m->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) { - PyObject *runerr; - Py_ssize_t key_value; - PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence; - if (unlikely(!(m && m->sq_item))) { - PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name); - return NULL; - } - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) { - PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping; - if (likely(m && m->mp_subscript)) { - return m->mp_subscript(obj, key); - } - return __Pyx_PyObject_GetIndex(obj, key); -} -#endif - -/* decode_c_string */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - Py_ssize_t length; - if (unlikely((start < 0) | (stop < 0))) { - size_t slen = strlen(cstring); - if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, - "c-string too long to convert to Python"); - return NULL; - } - length = (Py_ssize_t) slen; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - } - if (unlikely(stop <= start)) - return __Pyx_NewRef(__pyx_empty_unicode); - length = stop - start; - cstring += start; - if (decode_func) { - return decode_func(cstring, length, errors); - } else { - return PyUnicode_Decode(cstring, length, encoding, errors); - } -} - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { - PyObject *exc_type = tstate->curexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r = __Pyx_GetAttr(o, n); - return (likely(r)) ? 
r : __Pyx_GetAttr3Default(d); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? 
"" : "s"); -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* ExtTypeTest */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(__Pyx_TypeCheck(obj, type))) - return 1; - PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", - Py_TYPE(obj)->tp_name, type->tp_name); - return 0; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = 
exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = 
a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } -#endif - for (i=0; i<n; i++) { - PyObject *t = PyTuple_GET_ITEM(tuple, i); - #if PY_MAJOR_VERSION < 3 - if (likely(exc_type == t)) return 1; - #endif - if (likely(PyExceptionClass_Check(t))) { - if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1; - } else { - } - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) { - if (likely(err == exc_type)) return 1; - if (likely(PyExceptionClass_Check(err))) { - if (likely(PyExceptionClass_Check(exc_type))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); - } else if (likely(PyTuple_Check(exc_type))) { - return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type); - } else { - } - } - return PyObject_IsSubclass(err, exc_type); -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) { - assert(PyExceptionClass_Check(exc_type1)); - assert(PyExceptionClass_Check(exc_type2)); - if (likely(err == exc_type1 || err == exc_type2)) return 1; - if (likely(PyExceptionClass_Check(err))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2); - } - return (PyObject_IsSubclass(err, exc_type1) == 1) || (PyObject_IsSubclass(err, exc_type2) == 1); -} -#endif - -/* PyIntBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, CYTHON_UNUSED long intval, int inplace, int zerodivision_check) { - (void)inplace; - (void)zerodivision_check; - #if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(op1))) { - const long b = intval; - long x; - long a = PyInt_AS_LONG(op1); - x = (long)((unsigned long)a + b); - if (likely((x^a) >= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? 
digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; - double a = PyFloat_AS_DOUBLE(op1); - double result; - PyFPE_START_PROTECT("add", return NULL) - 
result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* None */ -static CYTHON_INLINE long __Pyx_div_long(long a, long b) { - long q = a / b; - long r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (unlikely(!r)) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'%.50s' object has no attribute '%U'", - tp->tp_name, attr_name); -#else - "'%.50s' object has no attribute '%.400s'", - tp->tp_name, PyString_AS_STRING(attr_name)); -#endif - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PyObject_GenericGetAttr */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) { - if (unlikely(Py_TYPE(obj)->tp_dictoffset)) { - return PyObject_GenericGetAttr(obj, attr_name); - } - return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name); -} -#endif - -/* SetVTable */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable) { -#if PY_VERSION_HEX >= 0x02070000 - PyObject *ob = PyCapsule_New(vtable, 0, 0); -#else - PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); -#endif - if (!ob) - goto bad; - if (PyDict_SetItem(dict, __pyx_n_s_pyx_vtable, ob) < 0) - goto bad; - Py_DECREF(ob); - return 0; -bad: - Py_XDECREF(ob); - return -1; -} - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS 
&& PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* SetupReduce */ -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name_2); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - Py_XDECREF(name_attr); - return ret; -} -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; -#if CYTHON_USE_PYTYPE_LOOKUP - if (_PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD; -#else - if (PyObject_HasAttr(type_obj, __pyx_n_s_getstate)) goto __PYX_GOOD; -#endif -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#endif - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD; - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - goto __PYX_BAD; - } - setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; -__PYX_BAD: - if (!PyErr_Occurred()) - PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); - ret = -1; -__PYX_GOOD: 
-#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - 
return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyObject *py_srcfile = 0; - PyObject *py_funcname = 0; - #if PY_MAJOR_VERSION < 3 - py_srcfile = PyString_FromString(filename); - #else - py_srcfile = PyUnicode_FromString(filename); - #endif - if (!py_srcfile) goto bad; - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - #else - py_funcname = PyUnicode_FromString(funcname); - #endif - } - if (!py_funcname) goto bad; - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - Py_DECREF(py_funcname); - return py_code; -bad: - Py_XDECREF(py_srcfile); - Py_XDECREF(py_funcname); - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) goto bad; - __pyx_insert_code_object(c_line ? 
-c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -#if PY_MAJOR_VERSION < 3 -static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { - if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags); - PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); - return -1; -} -static void __Pyx_ReleaseBuffer(Py_buffer *view) { - PyObject *obj = view->obj; - if (!obj) return; - if (PyObject_CheckBuffer(obj)) { - PyBuffer_Release(view); - return; - } - if ((0)) {} - view->obj = NULL; - Py_DECREF(obj); -} -#endif - - -/* MemviewSliceIsContig */ -static int -__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - itemsize *= mvs.shape[index]; - } - return 1; -} - -/* OverlappingSlices */ -static void -__pyx_get_array_memory_extents(__Pyx_memviewslice *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - start = end = slice->data; - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { - if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - *out_start = start; - *out_end = end + itemsize; -} -static int -__pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - return (start1 < end2) && (start2 < end1); -} - -/* Capsule */ -static CYTHON_INLINE PyObject * -__pyx_capsule_create(void *p, CYTHON_UNUSED const char *sig) -{ - PyObject *cobj; -#if PY_VERSION_HEX >= 0x02070000 - cobj = PyCapsule_New(p, sig, NULL); -#else - cobj = PyCObject_FromVoidPtr(p, NULL); -#endif - return cobj; -} - -/* IsLittleEndian */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) -{ - union { - uint32_t u32; - uint8_t u8[4]; - } S; - S.u32 = 0x01020304; - return S.u8[0] == 4; -} - -/* BufferFormatCheck */ -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type) { - stack[0].field = &ctx->root; - stack[0].parent_offset = 0; - ctx->root.type = type; - ctx->root.name = "buffer dtype"; - ctx->root.offset = 0; - ctx->head = stack; - ctx->head->field = &ctx->root; - ctx->fmt_offset = 0; - ctx->head->parent_offset = 0; - ctx->new_packmode = '@'; - ctx->enc_packmode = '@'; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->is_complex = 0; - ctx->is_valid_array = 0; - 
ctx->struct_alignment = 0; - while (type->typegroup == 'S') { - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = 0; - type = type->fields->type; - } -} -static int __Pyx_BufFmt_ParseNumber(const char** ts) { - int count; - const char* t = *ts; - if (*t < '0' || *t > '9') { - return -1; - } else { - count = *t++ - '0'; - while (*t >= '0' && *t <= '9') { - count *= 10; - count += *t++ - '0'; - } - } - *ts = t; - return count; -} -static int __Pyx_BufFmt_ExpectNumber(const char **ts) { - int number = __Pyx_BufFmt_ParseNumber(ts); - if (number == -1) - PyErr_Format(PyExc_ValueError,\ - "Does not understand character buffer dtype format string ('%c')", **ts); - return number; -} -static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { - PyErr_Format(PyExc_ValueError, - "Unexpected format string character: '%c'", ch); -} -static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { - switch (ch) { - case '?': return "'bool'"; - case 'c': return "'char'"; - case 'b': return "'signed char'"; - case 'B': return "'unsigned char'"; - case 'h': return "'short'"; - case 'H': return "'unsigned short'"; - case 'i': return "'int'"; - case 'I': return "'unsigned int'"; - case 'l': return "'long'"; - case 'L': return "'unsigned long'"; - case 'q': return "'long long'"; - case 'Q': return "'unsigned long long'"; - case 'f': return (is_complex ? "'complex float'" : "'float'"); - case 'd': return (is_complex ? "'complex double'" : "'double'"); - case 'g': return (is_complex ? "'complex long double'" : "'long double'"); - case 'T': return "a struct"; - case 'O': return "Python object"; - case 'P': return "a pointer"; - case 's': case 'p': return "a string"; - case 0: return "end"; - default: return "unparseable format string"; - } -} -static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return 2; - case 'i': case 'I': case 'l': case 'L': return 4; - case 'q': case 'Q': return 8; - case 'f': return (is_complex ? 8 : 4); - case 'd': return (is_complex ? 16 : 8); - case 'g': { - PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); - return 0; - } - case 'O': case 'P': return sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(short); - case 'i': case 'I': return sizeof(int); - case 'l': case 'L': return sizeof(long); - #ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(PY_LONG_LONG); - #endif - case 'f': return sizeof(float) * (is_complex ? 2 : 1); - case 'd': return sizeof(double) * (is_complex ? 2 : 1); - case 'g': return sizeof(long double) * (is_complex ? 
2 : 1); - case 'O': case 'P': return sizeof(void*); - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -typedef struct { char c; short x; } __Pyx_st_short; -typedef struct { char c; int x; } __Pyx_st_int; -typedef struct { char c; long x; } __Pyx_st_long; -typedef struct { char c; float x; } __Pyx_st_float; -typedef struct { char c; double x; } __Pyx_st_double; -typedef struct { char c; long double x; } __Pyx_st_longdouble; -typedef struct { char c; void *x; } __Pyx_st_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_st_float) - sizeof(float); - case 'd': return sizeof(__Pyx_st_double) - sizeof(double); - case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -/* These are for computing the padding at the end of the struct to align - on the first member of the struct. This will probably be the same as above, - but we don't have any guarantees. - */ -typedef struct { short x; char c; } __Pyx_pad_short; -typedef struct { int x; char c; } __Pyx_pad_int; -typedef struct { long x; char c; } __Pyx_pad_long; -typedef struct { float x; char c; } __Pyx_pad_float; -typedef struct { double x; char c; } __Pyx_pad_double; -typedef struct { long double x; char c; } __Pyx_pad_longdouble; -typedef struct { void *x; char c; } __Pyx_pad_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); - case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); - case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { - switch (ch) { - case 'c': - return 'H'; - case 'b': case 'h': case 'i': - case 'l': case 'q': case 's': case 'p': - return 'I'; - case '?': case 'B': case 'H': case 'I': case 'L': case 'Q': - return 'U'; - case 'f': case 'd': case 'g': - return (is_complex ? 
'C' : 'R'); - case 'O': - return 'O'; - case 'P': - return 'P'; - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { - if (ctx->head == NULL || ctx->head->field == &ctx->root) { - const char* expected; - const char* quote; - if (ctx->head == NULL) { - expected = "end"; - quote = ""; - } else { - expected = ctx->head->field->type->name; - quote = "'"; - } - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected %s%s%s but got %s", - quote, expected, quote, - __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); - } else { - __Pyx_StructField* field = ctx->head->field; - __Pyx_StructField* parent = (ctx->head - 1)->field; - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", - field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), - parent->type->name, field->name); - } -} -static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { - char group; - size_t size, offset, arraysize = 1; - if (ctx->enc_type == 0) return 0; - if (ctx->head->field->type->arraysize[0]) { - int i, ndim = 0; - if (ctx->enc_type == 's' || ctx->enc_type == 'p') { - ctx->is_valid_array = ctx->head->field->type->ndim == 1; - ndim = 1; - if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { - PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %zu", - ctx->head->field->type->arraysize[0], ctx->enc_count); - return -1; - } - } - if (!ctx->is_valid_array) { - PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", - ctx->head->field->type->ndim, ndim); - return -1; - } - for (i = 0; i < ctx->head->field->type->ndim; i++) { - arraysize *= ctx->head->field->type->arraysize[i]; - } - ctx->is_valid_array = 0; - ctx->enc_count = 1; - } - group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex); - do { - __Pyx_StructField* field = ctx->head->field; - __Pyx_TypeInfo* type = field->type; - if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { - size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); - } else { - size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); - } - if (ctx->enc_packmode == '@') { - size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); - size_t align_mod_offset; - if (align_at == 0) return -1; - align_mod_offset = ctx->fmt_offset % align_at; - if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; - if (ctx->struct_alignment == 0) - ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, - ctx->is_complex); - } - if (type->size != size || type->typegroup != group) { - if (type->typegroup == 'C' && type->fields != NULL) { - size_t parent_offset = ctx->head->parent_offset + field->offset; - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = parent_offset; - continue; - } - if ((type->typegroup == 'H' || group == 'H') && type->size == size) { - } else { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - } - offset = ctx->head->parent_offset + field->offset; - if (ctx->fmt_offset != offset) { - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", - (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); - return -1; - } - ctx->fmt_offset += size; - if (arraysize) - ctx->fmt_offset += (arraysize - 1) * size; - --ctx->enc_count; - while (1) { - if 
(field == &ctx->root) { - ctx->head = NULL; - if (ctx->enc_count != 0) { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - break; - } - ctx->head->field = ++field; - if (field->type == NULL) { - --ctx->head; - field = ctx->head->field; - continue; - } else if (field->type->typegroup == 'S') { - size_t parent_offset = ctx->head->parent_offset + field->offset; - if (field->type->fields->type == NULL) continue; - field = field->type->fields; - ++ctx->head; - ctx->head->field = field; - ctx->head->parent_offset = parent_offset; - break; - } else { - break; - } - } - } while (ctx->enc_count); - ctx->enc_type = 0; - ctx->is_complex = 0; - return 0; -} -static PyObject * -__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) -{ - const char *ts = *tsp; - int i = 0, number, ndim; - ++ts; - if (ctx->new_count != 1) { - PyErr_SetString(PyExc_ValueError, - "Cannot handle repeated arrays in format string"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ndim = ctx->head->field->type->ndim; - while (*ts && *ts != ')') { - switch (*ts) { - case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; - default: break; - } - number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) - return PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %d", - ctx->head->field->type->arraysize[i], number); - if (*ts != ',' && *ts != ')') - return PyErr_Format(PyExc_ValueError, - "Expected a comma in format string, got '%c'", *ts); - if (*ts == ',') ts++; - i++; - } - if (i != ndim) - return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", - ctx->head->field->type->ndim, i); - if (!*ts) { - PyErr_SetString(PyExc_ValueError, - "Unexpected end of format string, expected ')'"); - return NULL; - } - ctx->is_valid_array = 1; - ctx->new_count = 1; - *tsp = ++ts; - return Py_None; -} -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { - int got_Z = 0; - while (1) { - switch(*ts) { - case 0: - if (ctx->enc_type != 0 && ctx->head == NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - if (ctx->head != NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - return ts; - case ' ': - case '\r': - case '\n': - ++ts; - break; - case '<': - if (!__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '>': - case '!': - if (__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '=': - case '@': - case '^': - ctx->new_packmode = *ts++; - break; - case 'T': - { - const char* ts_after_sub; - size_t i, struct_count = ctx->new_count; - size_t struct_alignment = ctx->struct_alignment; - ctx->new_count = 1; - ++ts; - if (*ts != '{') { - PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - ctx->enc_count = 0; - ctx->struct_alignment = 0; - ++ts; - ts_after_sub = ts; - for (i = 0; i != struct_count; ++i) { - ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); - if (!ts_after_sub) return NULL; - } - ts = ts_after_sub; - if 
(struct_alignment) ctx->struct_alignment = struct_alignment; - } - break; - case '}': - { - size_t alignment = ctx->struct_alignment; - ++ts; - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - if (alignment && ctx->fmt_offset % alignment) { - ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); - } - } - return ts; - case 'x': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->fmt_offset += ctx->new_count; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->enc_packmode = ctx->new_packmode; - ++ts; - break; - case 'Z': - got_Z = 1; - ++ts; - if (*ts != 'f' && *ts != 'd' && *ts != 'g') { - __Pyx_BufFmt_RaiseUnexpectedChar('Z'); - return NULL; - } - CYTHON_FALLTHROUGH; - case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': - case 'l': case 'L': case 'q': case 'Q': - case 'f': case 'd': case 'g': - case 'O': case 'p': - if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) && - (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) { - ctx->enc_count += ctx->new_count; - ctx->new_count = 1; - got_Z = 0; - ++ts; - break; - } - CYTHON_FALLTHROUGH; - case 's': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_count = ctx->new_count; - ctx->enc_packmode = ctx->new_packmode; - ctx->enc_type = *ts; - ctx->is_complex = got_Z; - ++ts; - ctx->new_count = 1; - got_Z = 0; - break; - case ':': - ++ts; - while(*ts != ':') ++ts; - ++ts; - break; - case '(': - if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; - break; - default: - { - int number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - ctx->new_count = (size_t)number; - } - } - } -} - -/* TypeInfoCompare */ - static int -__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b) -{ - int i; - if (!a || !b) - return 0; - if (a == b) - return 1; - if (a->size != b->size || a->typegroup != b->typegroup || - a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) { - if (a->typegroup == 'H' || b->typegroup == 'H') { - return a->size == b->size; - } else { - return 0; - } - } - if (a->ndim) { - for (i = 0; i < a->ndim; i++) - if (a->arraysize[i] != b->arraysize[i]) - return 0; - } - if (a->typegroup == 'S') { - if (a->flags != b->flags) - return 0; - if (a->fields || b->fields) { - if (!(a->fields && b->fields)) - return 0; - for (i = 0; a->fields[i].type && b->fields[i].type; i++) { - __Pyx_StructField *field_a = a->fields + i; - __Pyx_StructField *field_b = b->fields + i; - if (field_a->offset != field_b->offset || - !__pyx_typeinfo_cmp(field_a->type, field_b->type)) - return 0; - } - return !a->fields[i].type && !b->fields[i].type; - } - } - return 1; -} - -/* MemviewSliceValidateAndInit */ - static int -__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and 
memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, CYTHON_UNUSED int ndim, int spec) -{ - if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - return 1; -fail: - return 0; -} -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - if (unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? 
"s" : ""); - goto fail; - } - if (buf->len > 0) { - for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - retval = 0; - goto no_fail; -fail: - Py_XDECREF(new_memview); - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_float, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { - const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if 
(sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { - const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* MemviewSliceCopyTemplate */ - static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } }; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - __Pyx_GOTREF(shape_tuple); - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - 
array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 0)) - goto fail; - goto no_fail; -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - -/* CIntFromPy */ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { - const int neg_one = (int) ((int) 0 - (int) 1), const_zero = (int) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - 
} else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - 
int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { - const long neg_one = (long) ((long) 0 - (long) 1), const_zero = (long) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 * PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= 
sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { 
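-        /* Slow path: none of the fast digit/word conversions above applied.
-           Convert through _PyLong_AsByteArray into the raw bytes of a native
-           long (on PyPy, where that helper is unavailable, a RuntimeError is
-           raised instead); host endianness is probed at run time from the
-           first byte of (int)1. */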
-#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) { - const char neg_one = (char) ((char) 0 - (char) 1), const_zero = (char) 0; - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(char) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (char) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case 1: __PYX_VERIFY_RETURN_INT(char, digit, digits[0]) - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 2 * PyLong_SHIFT) { - return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 3 * PyLong_SHIFT) { - return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 4 * PyLong_SHIFT) { - return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - if (unlikely(result == 1)) - goto 
raise_neg_overflow; - } -#endif - if (sizeof(char) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case -1: __PYX_VERIFY_RETURN_INT(char, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(char, digit, +digits[0]) - case -2: - if (8 * sizeof(char) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - } -#endif - if (sizeof(char) <= sizeof(long)) { - 
__PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - char val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (char) -1; - } - } else { - char val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (char) -1; - val = __Pyx_PyInt_As_char(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to char"); - return (char) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to char"); - return (char) -1; -} - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[4], rtversion[4]; - PyOS_snprintf(ctversion, 4, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - PyOS_snprintf(rtversion, 4, "%s", Py_GetVersion()); - if (ctversion[0] != rtversion[0] || ctversion[2] != rtversion[2]) { - char message[200]; - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - 
PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). 
" - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? 
__Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/Kajise/GPT4ALL-Falcon/README.md b/spaces/Kajise/GPT4ALL-Falcon/README.md deleted file mode 100644 index 94cf0f6a9f1f5c2715a58a51fb55a6b1470fc0ed..0000000000000000000000000000000000000000 --- a/spaces/Kajise/GPT4ALL-Falcon/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GPT4ALL Falcon -emoji: 🐢 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: agpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kangarroar/ApplioRVC-Inference/demucs/pretrained.py b/spaces/Kangarroar/ApplioRVC-Inference/demucs/pretrained.py deleted file mode 100644 index 6aac5db100cc7a9084af96d2cd083f0c8fac473c..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/demucs/pretrained.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -# author: adefossez - -import logging - -from diffq import DiffQuantizer -import torch.hub - -from .model import Demucs -from .tasnet import ConvTasNet -from .utils import set_state - -logger = logging.getLogger(__name__) -ROOT = "https://dl.fbaipublicfiles.com/demucs/v3.0/" - -PRETRAINED_MODELS = { - 'demucs': 'e07c671f', - 'demucs48_hq': '28a1282c', - 'demucs_extra': '3646af93', - 'demucs_quantized': '07afea75', - 'tasnet': 'beb46fac', - 'tasnet_extra': 'df3777b2', - 'demucs_unittest': '09ebc15f', -} - -SOURCES = ["drums", "bass", "other", "vocals"] - - -def get_url(name): - sig = PRETRAINED_MODELS[name] - return ROOT + name + "-" + sig[:8] + ".th" - - -def is_pretrained(name): - return name in PRETRAINED_MODELS - - -def load_pretrained(name): - if name == "demucs": - return demucs(pretrained=True) - elif name == "demucs48_hq": - return demucs(pretrained=True, hq=True, channels=48) - elif name == "demucs_extra": - return demucs(pretrained=True, extra=True) - elif name == "demucs_quantized": - return demucs(pretrained=True, quantized=True) - elif name == "demucs_unittest": - return demucs_unittest(pretrained=True) - elif name == "tasnet": - return tasnet(pretrained=True) - elif name == "tasnet_extra": - return tasnet(pretrained=True, extra=True) - else: - raise ValueError(f"Invalid pretrained name {name}") - - -def _load_state(name, model, quantizer=None): - url = get_url(name) - state = torch.hub.load_state_dict_from_url(url, map_location='cpu', check_hash=True) - set_state(model, quantizer, state) - if quantizer: - quantizer.detach() - - -def demucs_unittest(pretrained=True): - model = Demucs(channels=4, sources=SOURCES) - if pretrained: - _load_state('demucs_unittest', model) - return model - - -def demucs(pretrained=True, extra=False, quantized=False, hq=False, channels=64): - if not pretrained and (extra or quantized or hq): - raise ValueError("if extra or quantized is True, pretrained must be True.") - model = Demucs(sources=SOURCES, channels=channels) - if pretrained: - name = 'demucs' - if channels != 64: - name += str(channels) - quantizer = None - if sum([extra, quantized, hq]) > 1: - raise ValueError("Only one of extra, quantized, hq, can be True.") - if quantized: - quantizer = DiffQuantizer(model, group_size=8, 
min_size=1) - name += '_quantized' - if extra: - name += '_extra' - if hq: - name += '_hq' - _load_state(name, model, quantizer) - return model - - -def tasnet(pretrained=True, extra=False): - if not pretrained and extra: - raise ValueError("if extra is True, pretrained must be True.") - model = ConvTasNet(X=10, sources=SOURCES) - if pretrained: - name = 'tasnet' - if extra: - name = 'tasnet_extra' - _load_state(name, model) - return model diff --git a/spaces/Kellyasrfuhioj/stydbdcg/app.py b/spaces/Kellyasrfuhioj/stydbdcg/app.py deleted file mode 100644 index f1d4beb0a8f3cee27903f527b6bf8daa485a75a0..0000000000000000000000000000000000000000 --- a/spaces/Kellyasrfuhioj/stydbdcg/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("huggingface/gpt2").launch() \ No newline at end of file diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/models/fatchord_version.py b/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/models/fatchord_version.py deleted file mode 100644 index 70ef1e3f6b99f32cc4fa95f64acfa58268d71ad7..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/models/fatchord_version.py +++ /dev/null @@ -1,434 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from vocoder.distribution import sample_from_discretized_mix_logistic -from vocoder.display import * -from vocoder.audio import * - - -class ResBlock(nn.Module): - def __init__(self, dims): - super().__init__() - self.conv1 = nn.Conv1d(dims, dims, kernel_size=1, bias=False) - self.conv2 = nn.Conv1d(dims, dims, kernel_size=1, bias=False) - self.batch_norm1 = nn.BatchNorm1d(dims) - self.batch_norm2 = nn.BatchNorm1d(dims) - - def forward(self, x): - residual = x - x = self.conv1(x) - x = self.batch_norm1(x) - x = F.relu(x) - x = self.conv2(x) - x = self.batch_norm2(x) - return x + residual - - -class MelResNet(nn.Module): - def __init__(self, res_blocks, in_dims, compute_dims, res_out_dims, pad): - super().__init__() - k_size = pad * 2 + 1 - self.conv_in = nn.Conv1d(in_dims, compute_dims, kernel_size=k_size, bias=False) - self.batch_norm = nn.BatchNorm1d(compute_dims) - self.layers = nn.ModuleList() - for i in range(res_blocks): - self.layers.append(ResBlock(compute_dims)) - self.conv_out = nn.Conv1d(compute_dims, res_out_dims, kernel_size=1) - - def forward(self, x): - x = self.conv_in(x) - x = self.batch_norm(x) - x = F.relu(x) - for f in self.layers: x = f(x) - x = self.conv_out(x) - return x - - -class Stretch2d(nn.Module): - def __init__(self, x_scale, y_scale): - super().__init__() - self.x_scale = x_scale - self.y_scale = y_scale - - def forward(self, x): - b, c, h, w = x.size() - x = x.unsqueeze(-1).unsqueeze(3) - x = x.repeat(1, 1, 1, self.y_scale, 1, self.x_scale) - return x.view(b, c, h * self.y_scale, w * self.x_scale) - - -class UpsampleNetwork(nn.Module): - def __init__(self, feat_dims, upsample_scales, compute_dims, - res_blocks, res_out_dims, pad): - super().__init__() - total_scale = np.cumproduct(upsample_scales)[-1] - self.indent = pad * total_scale - self.resnet = MelResNet(res_blocks, feat_dims, compute_dims, res_out_dims, pad) - self.resnet_stretch = Stretch2d(total_scale, 1) - self.up_layers = nn.ModuleList() - for scale in upsample_scales: - k_size = (1, scale * 2 + 1) - padding = (0, scale) - stretch = Stretch2d(scale, 1) - conv = nn.Conv2d(1, 1, kernel_size=k_size, padding=padding, bias=False) - conv.weight.data.fill_(1. 
/ k_size[1]) - self.up_layers.append(stretch) - self.up_layers.append(conv) - - def forward(self, m): - aux = self.resnet(m).unsqueeze(1) - aux = self.resnet_stretch(aux) - aux = aux.squeeze(1) - m = m.unsqueeze(1) - for f in self.up_layers: m = f(m) - m = m.squeeze(1)[:, :, self.indent:-self.indent] - return m.transpose(1, 2), aux.transpose(1, 2) - - -class WaveRNN(nn.Module): - def __init__(self, rnn_dims, fc_dims, bits, pad, upsample_factors, - feat_dims, compute_dims, res_out_dims, res_blocks, - hop_length, sample_rate, mode='RAW'): - super().__init__() - self.mode = mode - self.pad = pad - if self.mode == 'RAW' : - self.n_classes = 2 ** bits - elif self.mode == 'MOL' : - self.n_classes = 30 - else : - RuntimeError("Unknown model mode value - ", self.mode) - - self.rnn_dims = rnn_dims - self.aux_dims = res_out_dims // 4 - self.hop_length = hop_length - self.sample_rate = sample_rate - - self.upsample = UpsampleNetwork(feat_dims, upsample_factors, compute_dims, res_blocks, res_out_dims, pad) - self.I = nn.Linear(feat_dims + self.aux_dims + 1, rnn_dims) - self.rnn1 = nn.GRU(rnn_dims, rnn_dims, batch_first=True) - self.rnn2 = nn.GRU(rnn_dims + self.aux_dims, rnn_dims, batch_first=True) - self.fc1 = nn.Linear(rnn_dims + self.aux_dims, fc_dims) - self.fc2 = nn.Linear(fc_dims + self.aux_dims, fc_dims) - self.fc3 = nn.Linear(fc_dims, self.n_classes) - - self.step = nn.Parameter(torch.zeros(1).long(), requires_grad=False) - self.num_params() - - def forward(self, x, mels): - self.step += 1 - bsize = x.size(0) - if torch.cuda.is_available(): - h1 = torch.zeros(1, bsize, self.rnn_dims).cuda() - h2 = torch.zeros(1, bsize, self.rnn_dims).cuda() - else: - h1 = torch.zeros(1, bsize, self.rnn_dims).cpu() - h2 = torch.zeros(1, bsize, self.rnn_dims).cpu() - mels, aux = self.upsample(mels) - - aux_idx = [self.aux_dims * i for i in range(5)] - a1 = aux[:, :, aux_idx[0]:aux_idx[1]] - a2 = aux[:, :, aux_idx[1]:aux_idx[2]] - a3 = aux[:, :, aux_idx[2]:aux_idx[3]] - a4 = aux[:, :, aux_idx[3]:aux_idx[4]] - - x = torch.cat([x.unsqueeze(-1), mels, a1], dim=2) - x = self.I(x) - res = x - x, _ = self.rnn1(x, h1) - - x = x + res - res = x - x = torch.cat([x, a2], dim=2) - x, _ = self.rnn2(x, h2) - - x = x + res - x = torch.cat([x, a3], dim=2) - x = F.relu(self.fc1(x)) - - x = torch.cat([x, a4], dim=2) - x = F.relu(self.fc2(x)) - return self.fc3(x) - - def generate(self, mels, batched, target, overlap, mu_law, progress_callback=None): - mu_law = mu_law if self.mode == 'RAW' else False - progress_callback = progress_callback or self.gen_display - - self.eval() - output = [] - start = time.time() - rnn1 = self.get_gru_cell(self.rnn1) - rnn2 = self.get_gru_cell(self.rnn2) - - with torch.no_grad(): - if torch.cuda.is_available(): - mels = mels.cuda() - else: - mels = mels.cpu() - wave_len = (mels.size(-1) - 1) * self.hop_length - mels = self.pad_tensor(mels.transpose(1, 2), pad=self.pad, side='both') - mels, aux = self.upsample(mels.transpose(1, 2)) - - if batched: - mels = self.fold_with_overlap(mels, target, overlap) - aux = self.fold_with_overlap(aux, target, overlap) - - b_size, seq_len, _ = mels.size() - - if torch.cuda.is_available(): - h1 = torch.zeros(b_size, self.rnn_dims).cuda() - h2 = torch.zeros(b_size, self.rnn_dims).cuda() - x = torch.zeros(b_size, 1).cuda() - else: - h1 = torch.zeros(b_size, self.rnn_dims).cpu() - h2 = torch.zeros(b_size, self.rnn_dims).cpu() - x = torch.zeros(b_size, 1).cpu() - - d = self.aux_dims - aux_split = [aux[:, :, d * i:d * (i + 1)] for i in range(4)] - - for i in range(seq_len): 
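-                # Autoregressive sampling loop: each timestep feeds the previous
-                # output sample x, the current upsampled mel frame m_t and the
-                # aux feature splits through the unrolled GRU cells, then draws
-                # the next sample from the predicted distribution.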
- - m_t = mels[:, i, :] - - a1_t, a2_t, a3_t, a4_t = (a[:, i, :] for a in aux_split) - - x = torch.cat([x, m_t, a1_t], dim=1) - x = self.I(x) - h1 = rnn1(x, h1) - - x = x + h1 - inp = torch.cat([x, a2_t], dim=1) - h2 = rnn2(inp, h2) - - x = x + h2 - x = torch.cat([x, a3_t], dim=1) - x = F.relu(self.fc1(x)) - - x = torch.cat([x, a4_t], dim=1) - x = F.relu(self.fc2(x)) - - logits = self.fc3(x) - - if self.mode == 'MOL': - sample = sample_from_discretized_mix_logistic(logits.unsqueeze(0).transpose(1, 2)) - output.append(sample.view(-1)) - if torch.cuda.is_available(): - # x = torch.FloatTensor([[sample]]).cuda() - x = sample.transpose(0, 1).cuda() - else: - x = sample.transpose(0, 1) - - elif self.mode == 'RAW' : - posterior = F.softmax(logits, dim=1) - distrib = torch.distributions.Categorical(posterior) - - sample = 2 * distrib.sample().float() / (self.n_classes - 1.) - 1. - output.append(sample) - x = sample.unsqueeze(-1) - else: - raise RuntimeError("Unknown model mode value - ", self.mode) - - if i % 100 == 0: - gen_rate = (i + 1) / (time.time() - start) * b_size / 1000 - progress_callback(i, seq_len, b_size, gen_rate) - - output = torch.stack(output).transpose(0, 1) - output = output.cpu().numpy() - output = output.astype(np.float64) - - if batched: - output = self.xfade_and_unfold(output, target, overlap) - else: - output = output[0] - - if mu_law: - output = decode_mu_law(output, self.n_classes, False) - if hp.apply_preemphasis: - output = de_emphasis(output) - - # Fade-out at the end to avoid signal cutting out suddenly - fade_out = np.linspace(1, 0, 20 * self.hop_length) - output = output[:wave_len] - output[-20 * self.hop_length:] *= fade_out - - self.train() - - return output - - - def gen_display(self, i, seq_len, b_size, gen_rate): - pbar = progbar(i, seq_len) - msg = f'| {pbar} {i*b_size}/{seq_len*b_size} | Batch Size: {b_size} | Gen Rate: {gen_rate:.1f}kHz | ' - stream(msg) - - def get_gru_cell(self, gru): - gru_cell = nn.GRUCell(gru.input_size, gru.hidden_size) - gru_cell.weight_hh.data = gru.weight_hh_l0.data - gru_cell.weight_ih.data = gru.weight_ih_l0.data - gru_cell.bias_hh.data = gru.bias_hh_l0.data - gru_cell.bias_ih.data = gru.bias_ih_l0.data - return gru_cell - - def pad_tensor(self, x, pad, side='both'): - # NB - this is just a quick method i need right now - # i.e., it won't generalise to other shapes/dims - b, t, c = x.size() - total = t + 2 * pad if side == 'both' else t + pad - if torch.cuda.is_available(): - padded = torch.zeros(b, total, c).cuda() - else: - padded = torch.zeros(b, total, c).cpu() - if side == 'before' or side == 'both': - padded[:, pad:pad + t, :] = x - elif side == 'after': - padded[:, :t, :] = x - return padded - - def fold_with_overlap(self, x, target, overlap): - - ''' Fold the tensor with overlap for quick batched inference. - Overlap will be used for crossfading in xfade_and_unfold() - - Args: - x (tensor) : Upsampled conditioning features. - shape=(1, timesteps, features) - target (int) : Target timesteps for each index of batch - overlap (int) : Timesteps for both xfade and rnn warmup - - Return: - (tensor) : shape=(num_folds, target + 2 * overlap, features) - - Details: - x = [[h1, h2, ... 
hn]] - - Where each h is a vector of conditioning features - - Eg: target=2, overlap=1 with x.size(1)=10 - - folded = [[h1, h2, h3, h4], - [h4, h5, h6, h7], - [h7, h8, h9, h10]] - ''' - - _, total_len, features = x.size() - - # Calculate variables needed - num_folds = (total_len - overlap) // (target + overlap) - extended_len = num_folds * (overlap + target) + overlap - remaining = total_len - extended_len - - # Pad if some time steps poking out - if remaining != 0: - num_folds += 1 - padding = target + 2 * overlap - remaining - x = self.pad_tensor(x, padding, side='after') - - if torch.cuda.is_available(): - folded = torch.zeros(num_folds, target + 2 * overlap, features).cuda() - else: - folded = torch.zeros(num_folds, target + 2 * overlap, features).cpu() - - # Get the values for the folded tensor - for i in range(num_folds): - start = i * (target + overlap) - end = start + target + 2 * overlap - folded[i] = x[:, start:end, :] - - return folded - - def xfade_and_unfold(self, y, target, overlap): - - ''' Applies a crossfade and unfolds into a 1d array. - - Args: - y (ndarry) : Batched sequences of audio samples - shape=(num_folds, target + 2 * overlap) - dtype=np.float64 - overlap (int) : Timesteps for both xfade and rnn warmup - - Return: - (ndarry) : audio samples in a 1d array - shape=(total_len) - dtype=np.float64 - - Details: - y = [[seq1], - [seq2], - [seq3]] - - Apply a gain envelope at both ends of the sequences - - y = [[seq1_in, seq1_target, seq1_out], - [seq2_in, seq2_target, seq2_out], - [seq3_in, seq3_target, seq3_out]] - - Stagger and add up the groups of samples: - - [seq1_in, seq1_target, (seq1_out + seq2_in), seq2_target, ...] - - ''' - - num_folds, length = y.shape - target = length - 2 * overlap - total_len = num_folds * (target + overlap) + overlap - - # Need some silence for the rnn warmup - silence_len = overlap // 2 - fade_len = overlap - silence_len - silence = np.zeros((silence_len), dtype=np.float64) - - # Equal power crossfade - t = np.linspace(-1, 1, fade_len, dtype=np.float64) - fade_in = np.sqrt(0.5 * (1 + t)) - fade_out = np.sqrt(0.5 * (1 - t)) - - # Concat the silence to the fades - fade_in = np.concatenate([silence, fade_in]) - fade_out = np.concatenate([fade_out, silence]) - - # Apply the gain to the overlap samples - y[:, :overlap] *= fade_in - y[:, -overlap:] *= fade_out - - unfolded = np.zeros((total_len), dtype=np.float64) - - # Loop to add up all the samples - for i in range(num_folds): - start = i * (target + overlap) - end = start + target + 2 * overlap - unfolded[start:end] += y[i] - - return unfolded - - def get_step(self) : - return self.step.data.item() - - def checkpoint(self, model_dir, optimizer) : - k_steps = self.get_step() // 1000 - self.save(model_dir.joinpath("checkpoint_%dk_steps.pt" % k_steps), optimizer) - - def log(self, path, msg) : - with open(path, 'a') as f: - print(msg, file=f) - - def load(self, path, optimizer) : - checkpoint = torch.load(path) - if "optimizer_state" in checkpoint: - self.load_state_dict(checkpoint["model_state"]) - optimizer.load_state_dict(checkpoint["optimizer_state"]) - else: - # Backwards compatibility - self.load_state_dict(checkpoint) - - def save(self, path, optimizer) : - torch.save({ - "model_state": self.state_dict(), - "optimizer_state": optimizer.state_dict(), - }, path) - - def num_params(self, print_out=True): - parameters = filter(lambda p: p.requires_grad, self.parameters()) - parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000 - if print_out : - print('Trainable Parameters: 
%.3fM' % parameters)
diff --git a/spaces/Kyan14/Mood_Based_Generative_Art/README.md b/spaces/Kyan14/Mood_Based_Generative_Art/README.md
deleted file mode 100644
index 4f9a8f49a8af021bbbaa50b10f42845253012a5d..0000000000000000000000000000000000000000
--- a/spaces/Kyan14/Mood_Based_Generative_Art/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Mood Based Generative Art
-emoji: 💻
-colorFrom: gray
-colorTo: gray
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: cc
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/KyanChen/BuildingExtraction/Utils/Utils.py b/spaces/KyanChen/BuildingExtraction/Utils/Utils.py
deleted file mode 100644
index af272cc343ede0274a5f4ad7779efa6386597d92..0000000000000000000000000000000000000000
--- a/spaces/KyanChen/BuildingExtraction/Utils/Utils.py
+++ /dev/null
@@ -1,619 +0,0 @@
-import yaml
-import torch
-import random
-import numpy as np
-import os
-import sys
-import matplotlib.pyplot as plt
-from einops import repeat
-import cv2
-import time
-import torch.nn.functional as F
-from pycocotools.coco import COCO  # needed by Infos.cal_metrics below
-from pycocotools.cocoeval import COCOeval
-
-
-__all__ = ["decode_mask_to_onehot",
-           "encode_onehot_to_mask",
-           'Logger',
-           'get_coords_grid',
-           'get_coords_grid_float',
-           'draw_bboxes',
-           'Infos',
-           'inv_normalize_img',
-           'make_numpy_img',
-           'get_metrics'
-           ]
-
-
-class Infos(object):
-    def __init__(self, phase, class_names=None):
-        assert phase in ['od'], "Error in Infos"
-        self.phase = phase
-        self.class_names = class_names
-        self.register()
-        self.pattern = 'train'
-        self.epoch_id = 0
-        self.max_epoch = 0
-        self.batch_id = 0
-        self.batch_num = 0
-        self.lr = 0
-        self.fps_data_load = 0
-        self.fps = 0
-        self.val_metric = 0
-
-    # 'running_acc': {'loss': [], 'mIoU': [], 'OA': [], 'F1_score': []},
-    # 'epoch_metrics': {'loss': 1e10, 'mIoU': 0, 'OA': 0, 'F1_score': 0},
-    # 'best_val_metrics': {'epoch_id': 0, 'loss': 1e10, 'mIoU': 0, 'OA': 0, 'F1_score': 0},
-    def set_epoch_training_time(self, data):
-        self.epoch_training_time = data
-
-    def set_pattern(self, data):
-        self.pattern = data
-    def set_epoch_id(self, data):
-        self.epoch_id = data
-    def set_max_epoch(self, data):
-        self.max_epoch = data
-    def set_batch_id(self, data):
-        self.batch_id = data
-    def set_batch_num(self, data):
-        self.batch_num = data
-    def set_lr(self, data):
-        self.lr = data
-    def set_fps_data_load(self, data):
-        self.fps_data_load = data
-    def set_fps(self, data):
-        self.fps = data
-    def clear_cache(self):
-        self.register()
-
-    def get_val_metric(self):
-        return self.val_metric
-
-    def cal_metrics(self):
-        if self.phase == 'od':
-            coco_api_gt = COCO()
-            coco_api_gt.dataset['images'] = []
-            coco_api_gt.dataset['annotations'] = []
-            ann_id = 0
-            for i, targets_per_image in enumerate(self.result_all['target_all']):
-                for j in range(targets_per_image.shape[0]):
-                    coco_api_gt.dataset['images'].append({'id': i})
-                    coco_api_gt.dataset['annotations'].append({
-                        'image_id': i,
-                        "category_id": int(targets_per_image[j, 0]),
-                        "bbox": np.hstack([targets_per_image[j, 1:3], targets_per_image[j, 3:5] - targets_per_image[j, 1:3]]),
-                        "area": np.prod(targets_per_image[j, 3:5] - targets_per_image[j, 1:3]),
-                        "id": ann_id,
-                        "iscrowd": 0
-                    })
-                    ann_id += 1
-            coco_api_gt.dataset['categories'] = [{"id": i, "supercategory": c, "name": c} for i, c in
-                                                 enumerate(self.class_names)]
-            coco_api_gt.createIndex()
-
-            coco_api_pred = COCO()
-            coco_api_pred.dataset['images'] = []
-            coco_api_pred.dataset['annotations'] = []
-            ann_id = 0
-            for i, preds_per_image in enumerate(self.result_all['pred_all']):
-                for j in range(preds_per_image.shape[0]):
-                    coco_api_pred.dataset['images'].append({'id': i})
-                    coco_api_pred.dataset['annotations'].append({
-                        'image_id': i,
-                        "category_id": int(preds_per_image[j, 0]),
-                        'score': preds_per_image[j, 1],
-                        "bbox": np.hstack(
-                            [preds_per_image[j, 2:4], preds_per_image[j, 4:6] - preds_per_image[j, 2:4]]),
-                        "area": np.prod(preds_per_image[j, 4:6] - preds_per_image[j, 2:4]),
-                        "id": ann_id,
-                        "iscrowd": 0
-                    })
-                    ann_id += 1
-            coco_api_pred.dataset['categories'] = [{"id": i, "supercategory": c, "name": c} for i, c in
-                                                   enumerate(self.class_names)]
-            coco_api_pred.createIndex()
-
-            coco_eval = COCOeval(coco_api_gt, coco_api_pred, "bbox")
-            coco_eval.params.imgIds = coco_api_gt.getImgIds()
-            coco_eval.evaluate()
-            coco_eval.accumulate()
-            coco_eval.summarize()
-            # pycocotools' summarize() returns None; the 12 summary values live in coco_eval.stats
-            self.metrics = coco_eval.stats
-            self.val_metric = self.metrics[1]
-
-    def print_epoch_state_infos(self, logger):
-        infos_str = 'Pattern: %s Epoch [%d,%d], time: %d loss: %.4f' % \
-                    (self.pattern, self.epoch_id, self.max_epoch, self.epoch_training_time, np.mean(self.loss_all['loss']))
-        logger.write(infos_str + '\n')
-        time_start = time.time()
-        self.cal_metrics()
-        time_end = time.time()
-        logger.write('Pattern: %s Epoch Eval_time: %d\n' % (self.pattern, (time_end - time_start)))
-
-        if self.phase == 'od':
-            titleStr = 6 * ['Average Precision'] + 6 * ['Average Recall']
-            typeStr = 6 * ['(AP)'] + 6 * ['(AR)']
-            iouStr = 12 * ['0.50:0.95']
-            iouStr[1] = '0.50'
-            iouStr[2] = '0.75'
-            areaRng = 3 * ['all'] + ['small', 'medium', 'large'] + 3 * ['all'] + ['small', 'medium', 'large']
-            maxDets = 6 * [100] + [1, 10, 100] + 3 * [100]
-            for i in range(12):
-                infos_str = '{:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}\n'
-                logger.write(infos_str.format(titleStr[i], typeStr[i], iouStr[i], areaRng[i], maxDets[i], self.metrics[i]))
-
-
-    def save_epoch_state_infos(self, writer):
-        iter = self.epoch_id
-        keys = [
-            'AP_m_all_100',
-            'AP_50_all_100',
-            'AP_75_all_100',
-            'AP_m_small_100',
-            'AP_m_medium_100',
-            'AP_m_large_100',
-            'AR_m_all_1',
-            'AR_m_all_10',
-            'AR_m_all_100',
-            'AR_m_small_100',
-            'AR_m_medium_100',
-            'AR_m_large_100',
-        ]
-        for i, key in enumerate(keys):
-            writer.add_scalar(f'%s/epoch/%s' % (self.pattern, key), self.metrics[i], iter)
-
-    def print_batch_state_infos(self, logger):
-        infos_str = 'Pattern: %s [%d,%d][%d,%d], lr: %5f, fps_data_load: %.2f, fps: %.2f' % \
-                    (self.pattern, self.epoch_id, self.max_epoch, self.batch_id,
-                     self.batch_num, self.lr, self.fps_data_load, self.fps)
-        # add loss
-        infos_str += ', loss: %.4f' % self.loss_all['loss'][-1]
-        logger.write(infos_str + '\n')
-
-    def save_batch_state_infos(self, writer):
-        iter = self.epoch_id * self.batch_num + self.batch_id
-        writer.add_scalar('%s/lr' % self.pattern, self.lr, iter)
-        for key, value in self.loss_all.items():
-            writer.add_scalar(f'%s/%s' % (self.pattern, key), value[-1], iter)
-
-    def save_results(self, img_batch, prior_mean, prior_std, vis_dir, *args, **kwargs):
-        batch_size = img_batch.size(0)
-        k = np.clip(int(0.3 * batch_size), a_min=1, a_max=batch_size)
-        ids = np.random.choice(range(batch_size), k, replace=False)
-        for img_id in ids:
-            img = img_batch[img_id].detach().cpu()
-            pred = self.result_all['pred_all'][img_id - batch_size]
-            target = self.result_all['target_all'][img_id - batch_size]
-
-            img = make_numpy_img(inv_normalize_img(img, prior_mean, prior_std))
-            pred_draw = draw_bboxes(img, pred, self.class_names, (255, 0, 0))
-            target_draw = draw_bboxes(img, target, self.class_names, (0, 255, 0))
-            # target = make_numpy_img(encode_onehot_to_mask(target))
-            # pred = make_numpy_img(pred_label[img_id])
-
-            vis = np.concatenate([img/255., pred_draw/255., target_draw/255.], axis=0)
-            vis = np.clip(vis, a_min=0, a_max=1)
-            file_name = os.path.join(vis_dir, self.pattern, f'{self.epoch_id}_{self.batch_id}_{img_id}.png')
-            plt.imsave(file_name, vis)
-
-    def register(self):
-        self.is_registered_result = False
-        self.result_all = {}
-
-        self.is_registered_loss = False
-        self.loss_all = {}
-
-    def register_result(self, data: dict):
-        for key in data.keys():
-            self.result_all[key] = []
-        self.is_registered_result = True
-
-    def append_result(self, data: dict):
-        if not self.is_registered_result:
-            self.register_result(data)
-        for key, value in data.items():
-            self.result_all[key] += value
-
-    def register_loss(self, data: dict):
-        for key in data.keys():
-            self.loss_all[key] = []
-        self.is_registered_loss = True
-
-    def append_loss(self, data: dict):
-        if not self.is_registered_loss:
-            self.register_loss(data)
-        for key, value in data.items():
-            self.loss_all[key].append(value.detach().cpu().numpy())
-
-
-# draw bboxes on image, bboxes with classID
-# NOTE: class_names comes before color so that calls of the form
-# draw_bboxes(img, boxes, class_names, color) bind as intended (see Infos.save_results).
-def draw_bboxes(img, bboxes, class_names=None, color=(255, 0, 0), is_show_score=True):
-    '''
-    Args:
-        img:
-        bboxes: [n, 5], class_idx, l, t, r, b
-                [n, 6], class_idx, score, l, t, r, b
-    Returns:
-    '''
-    assert img is not None, "In draw_bboxes, img is None"
-    if torch.is_tensor(img):
-        img = img.cpu().numpy()
-    img = img.astype(np.uint8).copy()
-
-    if torch.is_tensor(bboxes):
-        bboxes = bboxes.cpu().numpy()
-    for bbox in bboxes:
-        # fall back to the raw class index when no name list is given
-        class_name = class_names[int(bbox[0])] if class_names else str(int(bbox[0]))
-        bbox_coordinate = bbox[1:]
-        if len(bbox) == 6:
-            score = bbox[1]
-            bbox_coordinate = bbox[2:]
-        bbox_coordinate = bbox_coordinate.astype(int)
-        if is_show_score:
-            cv2.rectangle(img, pt1=tuple(bbox_coordinate[0:2] - np.array([2, 15])),
-                          pt2=tuple(bbox_coordinate[0:2] + np.array([15, 1])), color=(0, 0, 255), thickness=-1)
-            if len(bbox) == 6:
-                cv2.putText(img, text='%s:%.2f' % (class_name, score),
-                            org=tuple(bbox_coordinate[0:2] - np.array([1, 7])), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
-                            fontScale=0.2, color=(255, 255, 255), thickness=1)
-            else:
-                cv2.putText(img, text='%s' % class_name,
-                            org=tuple(bbox_coordinate[0:2] - np.array([1, 7])), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
-                            fontScale=0.2, color=(255, 255, 255), thickness=1)
-        cv2.rectangle(img, pt1=tuple(bbox_coordinate[0:2]), pt2=tuple(bbox_coordinate[2:4]), color=color, thickness=2)
-    return img
-
-
-def get_coords_grid(h_end, w_end, h_start=0, w_start=0, h_steps=None, w_steps=None, is_normalize=False):
-    if h_steps is None:
-        h_steps = int(h_end - h_start) + 1
-    if w_steps is None:
-        w_steps = int(w_end - w_start) + 1
-
-    y = torch.linspace(h_start, h_end, h_steps)
-    x = torch.linspace(w_start, w_end, w_steps)
-    if is_normalize:
-        y = y / h_end
-        x = x / w_end
-    coords = torch.meshgrid(y, x)
-    coords = torch.stack(coords[::-1], dim=0)
-    return coords
-
-
-def get_coords_grid_float(ht, wd, scale, is_normalize=False):
-    y = torch.linspace(0, scale, ht + 2)
-    x = torch.linspace(0, scale, wd + 2)
-    if is_normalize:
-        y = y/scale
-        x = x/scale
-    coords = torch.meshgrid(y[1:-1], x[1:-1])
-    coords = torch.stack(coords[::-1], dim=0)
-    return coords
-
-
-def get_coords_vector_float(len, scale, is_normalize=False):
-    x = torch.linspace(0, scale, len+2)
-    if is_normalize:
-        x = x/scale
-    coords = torch.meshgrid(x[1:-1], torch.tensor([0.]))
-    coords = torch.stack(coords[::-1], dim=0)
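-    # coords: [2, len, 1]; the meshgrid outputs are reversed before stacking, as in get_coords_grid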
- return coords - - -class Logger(object): - def __init__(self, filename="Default.log", is_terminal_show=True): - self.is_terminal_show = is_terminal_show - if self.is_terminal_show: - self.terminal = sys.stdout - self.log = open(filename, "a") - - def write(self, message): - if self.is_terminal_show: - self.terminal.write(message) - self.log.write(message) - self.flush() - - def flush(self): - if self.is_terminal_show: - self.terminal.flush() - self.log.flush() - - -class ParamsParser: - def __init__(self, project_file): - self.params = yaml.safe_load(open(project_file).read()) - - def __getattr__(self, item): - return self.params.get(item, None) - - -def get_all_dict(dict_infos: dict) -> dict: - return_dict = {} - for key, value in dict_infos.items(): - if not isinstance(value, dict): - return_dict[key] = value - else: - return_dict = dict(return_dict.items(), **get_all_dict(value)) - return return_dict - - -def make_numpy_img(tensor_data): - if len(tensor_data.shape) == 2: - tensor_data = tensor_data.unsqueeze(2) - tensor_data = torch.cat((tensor_data, tensor_data, tensor_data), dim=2) - elif tensor_data.size(0) == 1: - tensor_data = tensor_data.permute((1, 2, 0)) - tensor_data = torch.cat((tensor_data, tensor_data, tensor_data), dim=2) - elif tensor_data.size(0) == 3: - tensor_data = tensor_data.permute((1, 2, 0)) - elif tensor_data.size(2) == 3: - pass - else: - raise Exception('tensor_data apply to make_numpy_img error') - vis_img = tensor_data.detach().cpu().numpy() - - return vis_img - - -def print_infos(logger, writer, infos: dict): - keys = list(infos.keys()) - values = list(infos.values()) - infos_str = 'Pattern: %s [%d,%d][%d,%d], lr: %5f, fps_data_load: %.2f, fps: %.2f' % tuple(values[:8]) - if len(values) > 8: - extra_infos = [f', {x}: {y:.4f}' for x, y in zip(keys[8:], values[8:])] - infos_str = infos_str + ''.join(extra_infos) - - logger.write(infos_str + '\n') - - writer.add_scalar('%s/lr' % infos['pattern'], infos['lr'], - infos['epoch_id'] * infos['batch_num'] + infos['batch_id']) - for key, value in zip(keys[8:], values[8:]): - writer.add_scalar(f'%s/%s' % (infos['pattern'], key), value, - infos['epoch_id'] * infos['batch_num'] + infos['batch_id']) - - -def invert_affine(origin_imgs, preds, pattern='train'): - if pattern == 'val': - for i in range(len(preds)): - if len(preds[i]['rois']) == 0: - continue - else: - old_h, old_w, _ = origin_imgs[i].shape - preds[i]['rois'][:, [0, 2]] = preds[i]['rois'][:, [0, 2]] / (512 / old_w) - preds[i]['rois'][:, [1, 3]] = preds[i]['rois'][:, [1, 3]] / (512 / old_h) - return preds - - -def save_output_infos(input, output, vis_dir, pattern, epoch_id, batch_id): - flows, pf1s, pf2s = output - k = np.clip(int(0.2 * len(flows[0])), a_min=2, a_max=len(flows[0])) - ids = np.random.choice(range(len(flows[0])), k, replace=False) - for img_id in ids: - img1, img2 = input['ori_img1'][img_id:img_id+1].to(flows[0].device), input['ori_img2'][img_id:img_id+1].to(flows[0].device) - # call the network with image pair batches and actions - flow = flows[0][img_id:img_id+1] - warps = flow_to_warp(flow) - - warped_img2 = resample(img2, warps) - - ori_img1 = make_numpy_img(img1[0]) / 255. - ori_img2 = make_numpy_img(img2[0]) / 255. - warped_img2 = make_numpy_img(warped_img2[0]) / 255. - flow_amplitude = torch.sqrt(flow[0, 0:1, ...] ** 2 + flow[0, 1:2, ...] 
** 2)
-        flow_amplitude = make_numpy_img(flow_amplitude)
-        flow_amplitude = (flow_amplitude - np.min(flow_amplitude)) / (np.max(flow_amplitude) - np.min(flow_amplitude) + 1e-10)
-        u = make_numpy_img(flow[0, 0:1, ...])
-        v = make_numpy_img(flow[0, 1:2, ...])
-
-        vis = np.concatenate([ori_img1, ori_img2, warped_img2, flow_amplitude], axis=0)
-        vis = np.clip(vis, a_min=0, a_max=1)
-        file_name = os.path.join(vis_dir, pattern, str(epoch_id) + '_' + str(batch_id) + '.jpg')
-        plt.imsave(file_name, vis)
-
-
-def inv_normalize_img(img, prior_mean=[0, 0, 0], prior_std=[1, 1, 1]):
-    prior_mean = torch.tensor(prior_mean, dtype=torch.float).to(img.device).view(img.size(0), 1, 1)
-    prior_std = torch.tensor(prior_std, dtype=torch.float).to(img.device).view(img.size(0), 1, 1)
-    img = img * prior_std + prior_mean
-    img = img * 255.
-    img = torch.clamp(img, min=0, max=255)
-    return img
-
-
-def save_seg_output_infos(input, output, vis_dir, pattern, epoch_id, batch_id, prior_mean, prior_std):
-    pred_label = torch.argmax(output, 1)
-    # sample roughly 20% of the batch, clamped to the batch size
-    k = np.clip(int(0.2 * len(pred_label)), a_min=1, a_max=len(pred_label))
-    ids = np.random.choice(range(len(pred_label)), k, replace=False)
-    for img_id in ids:
-        img = input['img'][img_id].to(pred_label.device)
-        target = input['label'][img_id].to(pred_label.device)
-
-        img = make_numpy_img(inv_normalize_img(img, prior_mean, prior_std)) / 255.
-        target = make_numpy_img(encode_onehot_to_mask(target))
-        pred = make_numpy_img(pred_label[img_id])
-
-        vis = np.concatenate([img, pred, target], axis=0)
-        vis = np.clip(vis, a_min=0, a_max=1)
-        file_name = os.path.join(vis_dir, pattern, str(epoch_id) + '_' + str(batch_id) + '.jpg')
-        plt.imsave(file_name, vis)
-
-
-def set_requires_grad(nets, requires_grad=False):
-    """Set requires_grad=False for all the networks to avoid unnecessary computations
-    Parameters:
-        nets (network list) -- a list of networks
-        requires_grad (bool) -- whether the networks require gradients or not
-    """
-    if not isinstance(nets, list):
-        nets = [nets]
-    for net in nets:
-        if net is not None:
-            for param in net.parameters():
-                param.requires_grad = requires_grad
-
-
-def boolean_string(s):
-    if s not in {'False', 'True'}:
-        raise ValueError('Not a valid boolean string')
-    return s == 'True'
-
-
-def cpt_pxl_cls_acc(pred_idx, target):
-    pred_idx = torch.reshape(pred_idx, [-1])
-    target = torch.reshape(target, [-1])
-    return torch.mean((pred_idx.int() == target.int()).float())
-
-
-def cpt_batch_psnr(img, img_gt, PIXEL_MAX):
-    mse = torch.mean((img - img_gt) ** 2, dim=[1, 2, 3])
-    psnr = 20 * torch.log10(PIXEL_MAX / torch.sqrt(mse))
-    return torch.mean(psnr)
-
-
-def cpt_psnr(img, img_gt, PIXEL_MAX):
-    mse = np.mean((img - img_gt) ** 2)
-    psnr = 20 * np.log10(PIXEL_MAX / np.sqrt(mse))
-    return psnr
-
-
-def cpt_rgb_ssim(img, img_gt):
-    img = clip_01(img)
-    img_gt = clip_01(img_gt)
-    SSIM = 0
-    for i in range(3):
-        tmp = img[:, :, i]
-        tmp_gt = img_gt[:, :, i]
-        ssim = sk_cpt_ssim(tmp, tmp_gt)
-        SSIM = SSIM + ssim
-    return SSIM / 3.0
-
-
-def cpt_ssim(img, img_gt):
-    img = clip_01(img)
-    img_gt = clip_01(img_gt)
-    return sk_cpt_ssim(img, img_gt)
-
-
-def decode_mask_to_onehot(mask, n_class):
-    '''
-    mask : BxWxH or WxH
-    n_class : n
-    return : BxnxWxH or nxWxH
-    '''
-    assert len(mask.shape) in [2, 3], "decode_mask_to_onehot error!"
-    if len(mask.shape) == 2:
-        mask = mask.unsqueeze(0)
-    onehot = torch.zeros((mask.size(0), n_class, mask.size(1), mask.size(2))).to(mask.device)
-    for i in range(n_class):
-        onehot[:, i, ...]
= mask == i - if len(mask.shape) == 2: - onehot = onehot.squeeze(0) - return onehot - - -def encode_onehot_to_mask(onehot): - ''' - onehot: tensor, BxnxWxH or nxWxH - output: tensor, BxWxH or WxH - ''' - assert len(onehot.shape) in [3, 4], "encode_onehot_to_mask error!" - mask = torch.argmax(onehot, dim=len(onehot.shape)-3) - return mask - - -def decode(pred, target=None, *args, **kwargs): - """ - - Args: - phase: 'od' - pred: big_cls_1(0), big_reg_1, small_cls_1(2), small_reg_1, big_cls_2(4), big_reg_2, small_cls_2(6), small_reg_2 - target: [[n,5], [n,5]] list of tensor - - Returns: - - """ - phase = kwargs['phase'] - img_size = kwargs['img_size'] - if phase == 'od': - prior_box_wh = kwargs['prior_box_wh'] - conf_thres = kwargs['conf_thres'] - iou_thres = kwargs['iou_thres'] - conf_type = kwargs['conf_type'] - pred_conf_32_2 = F.softmax(pred[4], dim=1)[:, 1, ...] # B H W - pred_conf_64_2 = F.softmax(pred[6], dim=1)[:, 1, ...] # B H W - obj_mask_32_2 = pred_conf_32_2 > conf_thres # B H W - obj_mask_64_2 = pred_conf_64_2 > conf_thres # B H W - - pre_loc_32_2 = pred[1] + pred[5] # B 4 H W - pre_loc_32_2[:, 0::2, ...] *= prior_box_wh[0] - pre_loc_32_2[:, 1::2, ...] *= prior_box_wh[1] - x_y_grid = get_coords_grid(31, 31, 0, 0) - x_y_grid *= 8 - x_y_grid = torch.cat([x_y_grid, x_y_grid], dim=0) - pre_loc_32_2 += x_y_grid.to(pre_loc_32_2.device) - - pre_loc_64_2 = pred[3] + pred[7] # B 4 H W - pre_loc_64_2[:, 0::2, ...] *= prior_box_wh[0] - pre_loc_64_2[:, 1::2, ...] *= prior_box_wh[1] - x_y_grid_2 = get_coords_grid(63, 63, 0, 0) - x_y_grid_2 *= 4 - x_y_grid_2 = torch.cat([x_y_grid_2, x_y_grid_2], dim=0) - pre_loc_64_2 += x_y_grid_2.to(pre_loc_32_2.device) - - pred_all = [] - for i in range(pre_loc_32_2.size(0)): - score_32 = pred_conf_32_2[i][obj_mask_32_2[i]] # N - score_64 = pred_conf_64_2[i][obj_mask_64_2[i]] # M - - loc_32 = pre_loc_32_2[i].permute((1, 2, 0))[obj_mask_32_2[i]] # Nx4 - loc_64 = pre_loc_64_2[i].permute((1, 2, 0))[obj_mask_64_2[i]] # Mx4 - - score_list = torch.cat((score_32, score_64), dim=0).detach().cpu().numpy() - boxes_list = torch.cat((loc_32, loc_64), dim=0).detach().cpu().numpy() - boxes_list[:, 0::2] /= img_size[0] - boxes_list[:, 1::2] /= img_size[1] - label_list = np.ones_like(score_list) - # 目标预设150 - boxes_list = boxes_list[:150, :] - score_list = score_list[:150] - label_list = label_list[:150] - boxes, scores, labels = weighted_boxes_fusion([boxes_list], [score_list], [label_list], weights=None, - iou_thr=iou_thres, conf_type=conf_type) - boxes[:, 0::2] *= img_size[0] - boxes[:, 1::2] *= img_size[1] - pred_boxes = np.concatenate((labels.reshape(-1, 1), scores.reshape(-1, 1), boxes), axis=1) - pred_all.append(pred_boxes) - if target is not None: - target_all = [x.cpu().numpy() for x in target] - else: - target_all = None - return {"pred_all": pred_all, "target_all": target_all} - - - -def get_metrics(phase, pred, target): - - ''' - pred: logits, tensor, nBatch*nClass*W*H - target: labels, tensor, nBatch*nClass*W*H - ''' - if phase == 'seg': - pred = torch.argmax(pred.detach(), dim=1) - pred = decode_mask_to_onehot(pred, target.size(1)) - # positive samples in ground truth - gt_pos_sum = torch.sum(target == 1, dim=(0, 2, 3)) - # positive prediction in predict mask - pred_pos_sum = torch.sum(pred == 1, dim=(0, 2, 3)) - # cal true positive sample - true_pos_sum = torch.sum((target == 1) * (pred == 1), dim=(0, 2, 3)) - # Precision - precision = true_pos_sum / (pred_pos_sum + 1e-15) - # Recall - recall = true_pos_sum / (gt_pos_sum + 1e-15) - # IoU - IoU = true_pos_sum 
/ (pred_pos_sum + gt_pos_sum - true_pos_sum + 1e-15) - # OA - OA = 1 - (pred_pos_sum + gt_pos_sum - 2 * true_pos_sum) / torch.sum(target >= 0, dim=(0, 2, 3)) - # F1-score - F1_score = 2 * precision * recall / (precision + recall + 1e-15) - return IoU, OA, F1_score - diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/htc_roi_head.py b/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/htc_roi_head.py deleted file mode 100644 index 0fdd99ddd5ce4d9d42345d1f1d14ecbcae658124..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/roi_heads/htc_roi_head.py +++ /dev/null @@ -1,581 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Dict, List, Optional, Tuple - -import torch -import torch.nn.functional as F -from torch import Tensor - -from mmdet.models.test_time_augs import merge_aug_masks -from mmdet.registry import MODELS -from mmdet.structures import SampleList -from mmdet.structures.bbox import bbox2roi -from mmdet.utils import InstanceList, OptConfigType -from ..layers import adaptive_avg_pool2d -from ..task_modules.samplers import SamplingResult -from ..utils import empty_instances, unpack_gt_instances -from .cascade_roi_head import CascadeRoIHead - - -@MODELS.register_module() -class HybridTaskCascadeRoIHead(CascadeRoIHead): - """Hybrid task cascade roi head including one bbox head and one mask head. - - https://arxiv.org/abs/1901.07518 - - Args: - num_stages (int): Number of cascade stages. - stage_loss_weights (list[float]): Loss weight for every stage. - semantic_roi_extractor (:obj:`ConfigDict` or dict, optional): - Config of semantic roi extractor. Defaults to None. - Semantic_head (:obj:`ConfigDict` or dict, optional): - Config of semantic head. Defaults to None. - interleaved (bool): Whether to interleaves the box branch and mask - branch. If True, the mask branch can take the refined bounding - box predictions. Defaults to True. - mask_info_flow (bool): Whether to turn on the mask information flow, - which means that feeding the mask features of the preceding stage - to the current stage. Defaults to True. - """ - - def __init__(self, - num_stages: int, - stage_loss_weights: List[float], - semantic_roi_extractor: OptConfigType = None, - semantic_head: OptConfigType = None, - semantic_fusion: Tuple[str] = ('bbox', 'mask'), - interleaved: bool = True, - mask_info_flow: bool = True, - **kwargs) -> None: - super().__init__( - num_stages=num_stages, - stage_loss_weights=stage_loss_weights, - **kwargs) - assert self.with_bbox - assert not self.with_shared_head # shared head is not supported - - if semantic_head is not None: - self.semantic_roi_extractor = MODELS.build(semantic_roi_extractor) - self.semantic_head = MODELS.build(semantic_head) - - self.semantic_fusion = semantic_fusion - self.interleaved = interleaved - self.mask_info_flow = mask_info_flow - - # TODO move to base_roi_head later - @property - def with_semantic(self) -> bool: - """bool: whether the head has semantic head""" - return hasattr(self, - 'semantic_head') and self.semantic_head is not None - - def _bbox_forward( - self, - stage: int, - x: Tuple[Tensor], - rois: Tensor, - semantic_feat: Optional[Tensor] = None) -> Dict[str, Tensor]: - """Box head forward function used in both training and testing. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): List of multi-level img features. - rois (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. 
- semantic_feat (Tensor, optional): Semantic feature. Defaults to - None. - - Returns: - dict[str, Tensor]: Usually returns a dictionary with keys: - - - `cls_score` (Tensor): Classification scores. - - `bbox_pred` (Tensor): Box energies / deltas. - - `bbox_feats` (Tensor): Extract bbox RoI features. - """ - bbox_roi_extractor = self.bbox_roi_extractor[stage] - bbox_head = self.bbox_head[stage] - bbox_feats = bbox_roi_extractor(x[:bbox_roi_extractor.num_inputs], - rois) - if self.with_semantic and 'bbox' in self.semantic_fusion: - bbox_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if bbox_semantic_feat.shape[-2:] != bbox_feats.shape[-2:]: - bbox_semantic_feat = adaptive_avg_pool2d( - bbox_semantic_feat, bbox_feats.shape[-2:]) - bbox_feats += bbox_semantic_feat - cls_score, bbox_pred = bbox_head(bbox_feats) - - bbox_results = dict(cls_score=cls_score, bbox_pred=bbox_pred) - return bbox_results - - def bbox_loss(self, - stage: int, - x: Tuple[Tensor], - sampling_results: List[SamplingResult], - semantic_feat: Optional[Tensor] = None) -> dict: - """Run forward function and calculate loss for box head in training. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): List of multi-level img features. - sampling_results (list["obj:`SamplingResult`]): Sampling results. - semantic_feat (Tensor, optional): Semantic feature. Defaults to - None. - - Returns: - dict: Usually returns a dictionary with keys: - - - `cls_score` (Tensor): Classification scores. - - `bbox_pred` (Tensor): Box energies / deltas. - - `bbox_feats` (Tensor): Extract bbox RoI features. - - `loss_bbox` (dict): A dictionary of bbox loss components. - - `rois` (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - - `bbox_targets` (tuple): Ground truth for proposals in a - single image. Containing the following list of Tensors: - (labels, label_weights, bbox_targets, bbox_weights) - """ - bbox_head = self.bbox_head[stage] - rois = bbox2roi([res.priors for res in sampling_results]) - bbox_results = self._bbox_forward( - stage, x, rois, semantic_feat=semantic_feat) - bbox_results.update(rois=rois) - - bbox_loss_and_target = bbox_head.loss_and_target( - cls_score=bbox_results['cls_score'], - bbox_pred=bbox_results['bbox_pred'], - rois=rois, - sampling_results=sampling_results, - rcnn_train_cfg=self.train_cfg[stage]) - bbox_results.update(bbox_loss_and_target) - return bbox_results - - def _mask_forward(self, - stage: int, - x: Tuple[Tensor], - rois: Tensor, - semantic_feat: Optional[Tensor] = None, - training: bool = True) -> Dict[str, Tensor]: - """Mask head forward function used only in training. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): Tuple of multi-level img features. - rois (Tensor): RoIs with the shape (n, 5) where the first - column indicates batch id of each RoI. - semantic_feat (Tensor, optional): Semantic feature. Defaults to - None. - training (bool): Mask Forward is different between training and - testing. If True, use the mask forward in training. - Defaults to True. - - Returns: - dict: Usually returns a dictionary with keys: - - - `mask_preds` (Tensor): Mask prediction. 
- """ - mask_roi_extractor = self.mask_roi_extractor[stage] - mask_head = self.mask_head[stage] - mask_feats = mask_roi_extractor(x[:mask_roi_extractor.num_inputs], - rois) - - # semantic feature fusion - # element-wise sum for original features and pooled semantic features - if self.with_semantic and 'mask' in self.semantic_fusion: - mask_semantic_feat = self.semantic_roi_extractor([semantic_feat], - rois) - if mask_semantic_feat.shape[-2:] != mask_feats.shape[-2:]: - mask_semantic_feat = F.adaptive_avg_pool2d( - mask_semantic_feat, mask_feats.shape[-2:]) - mask_feats = mask_feats + mask_semantic_feat - - # mask information flow - # forward all previous mask heads to obtain last_feat, and fuse it - # with the normal mask feature - if training: - if self.mask_info_flow: - last_feat = None - for i in range(stage): - last_feat = self.mask_head[i]( - mask_feats, last_feat, return_logits=False) - mask_preds = mask_head( - mask_feats, last_feat, return_feat=False) - else: - mask_preds = mask_head(mask_feats, return_feat=False) - - mask_results = dict(mask_preds=mask_preds) - else: - aug_masks = [] - last_feat = None - for i in range(self.num_stages): - mask_head = self.mask_head[i] - if self.mask_info_flow: - mask_preds, last_feat = mask_head(mask_feats, last_feat) - else: - mask_preds = mask_head(mask_feats) - aug_masks.append(mask_preds) - - mask_results = dict(mask_preds=aug_masks) - - return mask_results - - def mask_loss(self, - stage: int, - x: Tuple[Tensor], - sampling_results: List[SamplingResult], - batch_gt_instances: InstanceList, - semantic_feat: Optional[Tensor] = None) -> dict: - """Run forward function and calculate loss for mask head in training. - - Args: - stage (int): The current stage in Cascade RoI Head. - x (tuple[Tensor]): Tuple of multi-level img features. - sampling_results (list["obj:`SamplingResult`]): Sampling results. - batch_gt_instances (list[:obj:`InstanceData`]): Batch of - gt_instance. It usually includes ``bboxes``, ``labels``, and - ``masks`` attributes. - semantic_feat (Tensor, optional): Semantic feature. Defaults to - None. - - Returns: - dict: Usually returns a dictionary with keys: - - - `mask_preds` (Tensor): Mask prediction. - - `loss_mask` (dict): A dictionary of mask loss components. - """ - pos_rois = bbox2roi([res.pos_priors for res in sampling_results]) - mask_results = self._mask_forward( - stage=stage, - x=x, - rois=pos_rois, - semantic_feat=semantic_feat, - training=True) - - mask_head = self.mask_head[stage] - mask_loss_and_target = mask_head.loss_and_target( - mask_preds=mask_results['mask_preds'], - sampling_results=sampling_results, - batch_gt_instances=batch_gt_instances, - rcnn_train_cfg=self.train_cfg[stage]) - mask_results.update(mask_loss_and_target) - - return mask_results - - def loss(self, x: Tuple[Tensor], rpn_results_list: InstanceList, - batch_data_samples: SampleList) -> dict: - """Perform forward propagation and loss calculation of the detection - roi on the features of the upstream network. - - Args: - x (tuple[Tensor]): List of multi-level img features. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. - batch_data_samples (list[:obj:`DetDataSample`]): The batch - data samples. It usually includes information such - as `gt_instance` or `gt_panoptic_seg` or `gt_sem_seg`. 
- - Returns: - dict[str, Tensor]: A dictionary of loss components - """ - assert len(rpn_results_list) == len(batch_data_samples) - outputs = unpack_gt_instances(batch_data_samples) - batch_gt_instances, batch_gt_instances_ignore, batch_img_metas \ - = outputs - - # semantic segmentation part - # 2 outputs: segmentation prediction and embedded features - losses = dict() - if self.with_semantic: - gt_semantic_segs = [ - data_sample.gt_sem_seg.sem_seg - for data_sample in batch_data_samples - ] - gt_semantic_segs = torch.stack(gt_semantic_segs) - semantic_pred, semantic_feat = self.semantic_head(x) - loss_seg = self.semantic_head.loss(semantic_pred, gt_semantic_segs) - losses['loss_semantic_seg'] = loss_seg - else: - semantic_feat = None - - results_list = rpn_results_list - num_imgs = len(batch_img_metas) - for stage in range(self.num_stages): - self.current_stage = stage - - stage_loss_weight = self.stage_loss_weights[stage] - - # assign gts and sample proposals - sampling_results = [] - bbox_assigner = self.bbox_assigner[stage] - bbox_sampler = self.bbox_sampler[stage] - for i in range(num_imgs): - results = results_list[i] - # rename rpn_results.bboxes to rpn_results.priors - if 'bboxes' in results: - results.priors = results.pop('bboxes') - - assign_result = bbox_assigner.assign( - results, batch_gt_instances[i], - batch_gt_instances_ignore[i]) - sampling_result = bbox_sampler.sample( - assign_result, - results, - batch_gt_instances[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - - # bbox head forward and loss - bbox_results = self.bbox_loss( - stage=stage, - x=x, - sampling_results=sampling_results, - semantic_feat=semantic_feat) - - for name, value in bbox_results['loss_bbox'].items(): - losses[f's{stage}.{name}'] = ( - value * stage_loss_weight if 'loss' in name else value) - - # mask head forward and loss - if self.with_mask: - # interleaved execution: use regressed bboxes by the box branch - # to train the mask branch - if self.interleaved: - bbox_head = self.bbox_head[stage] - with torch.no_grad(): - results_list = bbox_head.refine_bboxes( - sampling_results, bbox_results, batch_img_metas) - # re-assign and sample 512 RoIs from 512 RoIs - sampling_results = [] - for i in range(num_imgs): - results = results_list[i] - # rename rpn_results.bboxes to rpn_results.priors - results.priors = results.pop('bboxes') - assign_result = bbox_assigner.assign( - results, batch_gt_instances[i], - batch_gt_instances_ignore[i]) - sampling_result = bbox_sampler.sample( - assign_result, - results, - batch_gt_instances[i], - feats=[lvl_feat[i][None] for lvl_feat in x]) - sampling_results.append(sampling_result) - mask_results = self.mask_loss( - stage=stage, - x=x, - sampling_results=sampling_results, - batch_gt_instances=batch_gt_instances, - semantic_feat=semantic_feat) - for name, value in mask_results['loss_mask'].items(): - losses[f's{stage}.{name}'] = ( - value * stage_loss_weight if 'loss' in name else value) - - # refine bboxes (same as Cascade R-CNN) - if stage < self.num_stages - 1 and not self.interleaved: - bbox_head = self.bbox_head[stage] - with torch.no_grad(): - results_list = bbox_head.refine_bboxes( - sampling_results=sampling_results, - bbox_results=bbox_results, - batch_img_metas=batch_img_metas) - - return losses - - def predict(self, - x: Tuple[Tensor], - rpn_results_list: InstanceList, - batch_data_samples: SampleList, - rescale: bool = False) -> InstanceList: - """Perform forward propagation of the roi head and predict detection - 
results on the features of the upstream network. - - Args: - x (tuple[Tensor]): Features from upstream network. Each - has shape (N, C, H, W). - rpn_results_list (list[:obj:`InstanceData`]): list of region - proposals. - batch_data_samples (List[:obj:`DetDataSample`]): The Data - Samples. It usually includes information such as - `gt_instance`, `gt_panoptic_seg` and `gt_sem_seg`. - rescale (bool): Whether to rescale the results to - the original image. Defaults to False. - - Returns: - list[obj:`InstanceData`]: Detection results of each image. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - - masks (Tensor): Has a shape (num_instances, H, W). - """ - assert self.with_bbox, 'Bbox head must be implemented.' - batch_img_metas = [ - data_samples.metainfo for data_samples in batch_data_samples - ] - - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - # TODO: nms_op in mmcv need be enhanced, the bbox result may get - # difference when not rescale in bbox_head - - # If it has the mask branch, the bbox branch does not need - # to be scaled to the original image scale, because the mask - # branch will scale both bbox and mask at the same time. - bbox_rescale = rescale if not self.with_mask else False - results_list = self.predict_bbox( - x=x, - semantic_feat=semantic_feat, - batch_img_metas=batch_img_metas, - rpn_results_list=rpn_results_list, - rcnn_test_cfg=self.test_cfg, - rescale=bbox_rescale) - - if self.with_mask: - results_list = self.predict_mask( - x=x, - semantic_heat=semantic_feat, - batch_img_metas=batch_img_metas, - results_list=results_list, - rescale=rescale) - - return results_list - - def predict_mask(self, - x: Tuple[Tensor], - semantic_heat: Tensor, - batch_img_metas: List[dict], - results_list: InstanceList, - rescale: bool = False) -> InstanceList: - """Perform forward propagation of the mask head and predict detection - results on the features of the upstream network. - - Args: - x (tuple[Tensor]): Feature maps of all scale level. - semantic_feat (Tensor): Semantic feature. - batch_img_metas (list[dict]): List of image information. - results_list (list[:obj:`InstanceData`]): Detection results of - each image. - rescale (bool): If True, return boxes in original image space. - Defaults to False. - - Returns: - list[:obj:`InstanceData`]: Detection results of each image - after the post process. - Each item usually contains following keys. - - - scores (Tensor): Classification scores, has a shape - (num_instance, ) - - labels (Tensor): Labels of bboxes, has a shape - (num_instances, ). - - bboxes (Tensor): Has a shape (num_instances, 4), - the last dimension 4 arrange as (x1, y1, x2, y2). - - masks (Tensor): Has a shape (num_instances, H, W). 
- """ - num_imgs = len(batch_img_metas) - bboxes = [res.bboxes for res in results_list] - mask_rois = bbox2roi(bboxes) - if mask_rois.shape[0] == 0: - results_list = empty_instances( - batch_img_metas=batch_img_metas, - device=mask_rois.device, - task_type='mask', - instance_results=results_list, - mask_thr_binary=self.test_cfg.mask_thr_binary) - return results_list - - num_mask_rois_per_img = [len(res) for res in results_list] - mask_results = self._mask_forward( - stage=-1, - x=x, - rois=mask_rois, - semantic_feat=semantic_heat, - training=False) - # split batch mask prediction back to each image - aug_masks = [[ - mask.sigmoid().detach() - for mask in mask_preds.split(num_mask_rois_per_img, 0) - ] for mask_preds in mask_results['mask_preds']] - - merged_masks = [] - for i in range(num_imgs): - aug_mask = [mask[i] for mask in aug_masks] - merged_mask = merge_aug_masks(aug_mask, batch_img_metas[i]) - merged_masks.append(merged_mask) - - results_list = self.mask_head[-1].predict_by_feat( - mask_preds=merged_masks, - results_list=results_list, - batch_img_metas=batch_img_metas, - rcnn_test_cfg=self.test_cfg, - rescale=rescale, - activate_map=True) - - return results_list - - def forward(self, x: Tuple[Tensor], rpn_results_list: InstanceList, - batch_data_samples: SampleList) -> tuple: - """Network forward process. Usually includes backbone, neck and head - forward without any post-processing. - - Args: - x (List[Tensor]): Multi-level features that may have different - resolutions. - rpn_results_list (list[:obj:`InstanceData`]): List of region - proposals. - batch_data_samples (list[:obj:`DetDataSample`]): Each item contains - the meta information of each image and corresponding - annotations. - - Returns - tuple: A tuple of features from ``bbox_head`` and ``mask_head`` - forward. 
- """ - results = () - batch_img_metas = [ - data_samples.metainfo for data_samples in batch_data_samples - ] - num_imgs = len(batch_img_metas) - - if self.with_semantic: - _, semantic_feat = self.semantic_head(x) - else: - semantic_feat = None - - proposals = [rpn_results.bboxes for rpn_results in rpn_results_list] - num_proposals_per_img = tuple(len(p) for p in proposals) - rois = bbox2roi(proposals) - # bbox head - if self.with_bbox: - rois, cls_scores, bbox_preds = self._refine_roi( - x=x, - rois=rois, - semantic_feat=semantic_feat, - batch_img_metas=batch_img_metas, - num_proposals_per_img=num_proposals_per_img) - results = results + (cls_scores, bbox_preds) - # mask head - if self.with_mask: - rois = torch.cat(rois) - mask_results = self._mask_forward( - stage=-1, - x=x, - rois=rois, - semantic_feat=semantic_feat, - training=False) - aug_masks = [[ - mask.sigmoid().detach() - for mask in mask_preds.split(num_proposals_per_img, 0) - ] for mask_preds in mask_results['mask_preds']] - - merged_masks = [] - for i in range(num_imgs): - aug_mask = [mask[i] for mask in aug_masks] - merged_mask = merge_aug_masks(aug_mask, batch_img_metas[i]) - merged_masks.append(merged_mask) - results = results + (merged_masks, ) - return results diff --git a/spaces/LLLLLLLyc/anime-remove-background/app.py b/spaces/LLLLLLLyc/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/LLLLLLLyc/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - 
app.launch() diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/preprocess.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/preprocess.py deleted file mode 100644 index 784f46e0bf28f536f381356c117904dda9934e6f..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/infer/modules/uvr5/preprocess.py +++ /dev/null @@ -1,346 +0,0 @@ -import os -import logging - -logger = logging.getLogger(__name__) - -import librosa -import numpy as np -import soundfile as sf -import torch - -from lib.infer.infer_libs.uvr5_pack.lib_v5 import nets_61968KB as Nets -from lib.infer.infer_libs.uvr5_pack.lib_v5 import spec_utils -from lib.infer.infer_libs.uvr5_pack.lib_v5.model_param_init import ModelParameters -from lib.infer.infer_libs.uvr5_pack.lib_v5.nets_new import CascadedNet -from lib.infer.infer_libs.uvr5_pack.utils import inference - - -class AudioPre: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/infer/infer_libs/uvr5_pack/lib_v5/modelparams/4band_v2.json") - model = Nets.CascadedASPPNet(mp.param["bins"] * 2) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"): - if ins_root is None and vocal_root is None: - return "No save root." - name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) 
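-        # recombining the predicted magnitude with the mixture phase gives the
-        # instrumental spectrogram; the vocal spectrogram is taken as the residual below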
- y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - - -class AudioPreDeEcho: - def __init__(self, agg, model_path, device, is_half): - self.model_path = model_path - self.device = device - self.data = { - # Processing Options - "postprocess": False, - "tta": False, - # Constants - "window_size": 512, - "agg": agg, - "high_end_process": "mirroring", - } - mp = ModelParameters("lib/infer/infer_libs/uvr5_pack/lib_v5/modelparams/4band_v3.json") - nout = 64 if "DeReverb" in model_path else 48 - model = CascadedNet(mp.param["bins"] * 2, nout) - cpk = torch.load(model_path, map_location="cpu") - model.load_state_dict(cpk) - model.eval() - if is_half: - model = model.half().to(device) - else: - model = model.to(device) - - self.mp = mp - self.model = model - - def _path_audio_( - self, music_file, vocal_root=None, ins_root=None, format="flac" - ): # 3个VR模型vocal和ins是反的 - if ins_root is None and vocal_root is None: - return "No save root." 
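-        # per the note on the signature: these three VR (de-echo/de-reverb) models emit
-        # vocal and instrument swapped, hence the reversed vocal_root/ins_root order here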
- name = os.path.basename(music_file) - if ins_root is not None: - os.makedirs(ins_root, exist_ok=True) - if vocal_root is not None: - os.makedirs(vocal_root, exist_ok=True) - X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {} - bands_n = len(self.mp.param["band"]) - # print(bands_n) - for d in range(bands_n, 0, -1): - bp = self.mp.param["band"][d] - if d == bands_n: # high-end band - ( - X_wave[d], - _, - ) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑 - music_file, - bp["sr"], - False, - dtype=np.float32, - res_type=bp["res_type"], - ) - if X_wave[d].ndim == 1: - X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]]) - else: # lower bands - X_wave[d] = librosa.core.resample( - X_wave[d + 1], - self.mp.param["band"][d + 1]["sr"], - bp["sr"], - res_type=bp["res_type"], - ) - # Stft of wave source - X_spec_s[d] = spec_utils.wave_to_spectrogram_mt( - X_wave[d], - bp["hl"], - bp["n_fft"], - self.mp.param["mid_side"], - self.mp.param["mid_side_b2"], - self.mp.param["reverse"], - ) - # pdb.set_trace() - if d == bands_n and self.data["high_end_process"] != "none": - input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + ( - self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"] - ) - input_high_end = X_spec_s[d][ - :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, : - ] - - X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp) - aggresive_set = float(self.data["agg"] / 100) - aggressiveness = { - "value": aggresive_set, - "split_bin": self.mp.param["band"][1]["crop_stop"], - } - with torch.no_grad(): - pred, X_mag, X_phase = inference( - X_spec_m, self.device, self.model, aggressiveness, self.data - ) - # Postprocess - if self.data["postprocess"]: - pred_inv = np.clip(X_mag - pred, 0, np.inf) - pred = spec_utils.mask_silence(pred, pred_inv) - y_spec_m = pred * X_phase - v_spec_m = X_spec_m - y_spec_m - - if ins_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], y_spec_m, input_high_end, self.mp - ) - wav_instrument = spec_utils.cmb_spectrogram_to_wave( - y_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp) - logger.info("%s instruments done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - ins_root, - "instrument_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) # - else: - path = os.path.join( - ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_instrument) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) - if vocal_root is not None: - if self.data["high_end_process"].startswith("mirroring"): - input_high_end_ = spec_utils.mirroring( - self.data["high_end_process"], v_spec_m, input_high_end, self.mp - ) - wav_vocals = spec_utils.cmb_spectrogram_to_wave( - v_spec_m, self.mp, input_high_end_h, input_high_end_ - ) - else: - wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp) - logger.info("%s vocals done" % name) - if format in ["wav", "flac"]: - sf.write( - os.path.join( - vocal_root, - "vocal_{}_{}.{}".format(name, self.data["agg"], format), - ), - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - else: - path = os.path.join( - vocal_root, 
"vocal_{}_{}.wav".format(name, self.data["agg"]) - ) - sf.write( - path, - (np.array(wav_vocals) * 32768).astype("int16"), - self.mp.param["sr"], - ) - if os.path.exists(path): - os.system( - "ffmpeg -i %s -vn %s -q:a 2 -y" - % (path, path[:-4] + ".%s" % format) - ) diff --git a/spaces/Liu-LAB/GPT-academic/app.py b/spaces/Liu-LAB/GPT-academic/app.py deleted file mode 100644 index 2786718b6698f062866ceb04dd44e3344c1cd8ee..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/app.py +++ /dev/null @@ -1,260 +0,0 @@ -import os; os.environ['no_proxy'] = '*' # 避免代理网络产生意外污染 - -def main(): - import subprocess, sys - subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'gradio-stable-fork']) - import gradio as gr - from request_llm.bridge_all import predict - from toolbox import format_io, find_free_port, on_file_uploaded, on_report_generated, get_conf, ArgsGeneralWrapper, load_chat_cookies, DummyWith - # 建议您复制一个config_private.py放自己的秘密, 如API和代理网址, 避免不小心传github被别人看到 - proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION = get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION') - CHATBOT_HEIGHT, LAYOUT, AVAIL_LLM_MODELS, AUTO_CLEAR_TXT = get_conf('CHATBOT_HEIGHT', 'LAYOUT', 'AVAIL_LLM_MODELS', 'AUTO_CLEAR_TXT') - ENABLE_AUDIO, AUTO_CLEAR_TXT = get_conf('ENABLE_AUDIO', 'AUTO_CLEAR_TXT') - - # 如果WEB_PORT是-1, 则随机选取WEB端口 - PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT - from check_proxy import get_current_version - from themes.theme import adjust_theme, advanced_css, theme_declaration - initial_prompt = "Serve me as a writing and programming assistant." - title_html = f"

<h1 align=\"center\">GPT 学术优化 {get_current_version()}</h1>

{theme_declaration}" - description = "代码开源和更新[地址🚀](https://github.com/binary-husky/gpt_academic)," - description += "感谢热情的[开发者们❤️](https://github.com/binary-husky/gpt_academic/graphs/contributors)" - - # 问询记录, python 版本建议3.9+(越新越好) - import logging, uuid - os.makedirs("gpt_log", exist_ok=True) - try:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO, encoding="utf-8", format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S") - except:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO, format="%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S") - # Disable logging output from the 'httpx' logger - logging.getLogger("httpx").setLevel(logging.WARNING) - print("所有问询记录将自动保存在本地目录./gpt_log/chat_secrets.log, 请注意自我隐私保护哦!") - - # 一些普通功能模块 - from core_functional import get_core_functions - functional = get_core_functions() - - # 高级函数插件 - from crazy_functional import get_crazy_functions - DEFAULT_FN_GROUPS, = get_conf('DEFAULT_FN_GROUPS') - plugins = get_crazy_functions() - all_plugin_groups = list(set([g for _, plugin in plugins.items() for g in plugin['Group'].split('|')])) - match_group = lambda tags, groups: any([g in groups for g in tags.split('|')]) - - # 处理markdown文本格式的转变 - gr.Chatbot.postprocess = format_io - - # 做一些外观色彩上的调整 - set_theme = adjust_theme() - - # 代理与自动更新 - from check_proxy import check_proxy, auto_update, warm_up_modules - proxy_info = check_proxy(proxies) - - gr_L1 = lambda: gr.Row().style() - gr_L2 = lambda scale, elem_id: gr.Column(scale=scale, elem_id=elem_id) - if LAYOUT == "TOP-DOWN": - gr_L1 = lambda: DummyWith() - gr_L2 = lambda scale, elem_id: gr.Row() - CHATBOT_HEIGHT /= 2 - - cancel_handles = [] - with gr.Blocks(title="GPT 学术优化", theme=set_theme, analytics_enabled=False, css=advanced_css) as demo: - gr.HTML(title_html) - cookies = gr.State(load_chat_cookies()) - with gr_L1(): - with gr_L2(scale=2, elem_id="gpt-chat"): - chatbot = gr.Chatbot(label=f"当前模型:{LLM_MODEL}", elem_id="gpt-chatbot") - if LAYOUT == "TOP-DOWN": chatbot.style(height=CHATBOT_HEIGHT) - history = gr.State([]) - with gr_L2(scale=1, elem_id="gpt-panel"): - with gr.Accordion("输入区", open=True, elem_id="input-panel") as area_input_primary: - with gr.Row(): - txt = gr.Textbox(show_label=False, lines=2, placeholder="输入问题或API密钥,输入多个密钥时,用英文逗号间隔。支持OpenAI密钥和API2D密钥共存。").style(container=False) - with gr.Row(): - submitBtn = gr.Button("提交", variant="primary") - with gr.Row(): - resetBtn = gr.Button("重置", variant="secondary"); resetBtn.style(size="sm") - stopBtn = gr.Button("停止", variant="secondary"); stopBtn.style(size="sm") - clearBtn = gr.Button("清除", variant="secondary", visible=False); clearBtn.style(size="sm") - if ENABLE_AUDIO: - with gr.Row(): - audio_mic = gr.Audio(source="microphone", type="numpy", streaming=True, show_label=False).style(container=False) - with gr.Row(): - status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {proxy_info}", elem_id="state-panel") - with gr.Accordion("基础功能区", open=True, elem_id="basic-panel") as area_basic_fn: - with gr.Row(): - for k in functional: - if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue - variant = functional[k]["Color"] if "Color" in functional[k] else "secondary" - functional[k]["Button"] = gr.Button(k, variant=variant) - functional[k]["Button"].style(size="sm") - with gr.Accordion("函数插件区", open=True, elem_id="plugin-panel") as area_crazy_fn: - with gr.Row(): - gr.Markdown("插件可读取“输入区”文本/路径作为参数(上传文件自动修正路径)") - with 
gr.Row(elem_id="input-plugin-group"): - plugin_group_sel = gr.Dropdown(choices=all_plugin_groups, label='', show_label=False, value=DEFAULT_FN_GROUPS, - multiselect=True, interactive=True, elem_classes='normal_mut_select').style(container=False) - with gr.Row(): - for k, plugin in plugins.items(): - if not plugin.get("AsButton", True): continue - visible = True if match_group(plugin['Group'], DEFAULT_FN_GROUPS) else False - variant = plugins[k]["Color"] if "Color" in plugin else "secondary" - plugin['Button'] = plugins[k]['Button'] = gr.Button(k, variant=variant, visible=visible).style(size="sm") - with gr.Row(): - with gr.Accordion("更多函数插件", open=True): - dropdown_fn_list = [] - for k, plugin in plugins.items(): - if not match_group(plugin['Group'], DEFAULT_FN_GROUPS): continue - if not plugin.get("AsButton", True): dropdown_fn_list.append(k) # 排除已经是按钮的插件 - elif plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # 对于需要高级参数的插件,亦在下拉菜单中显示 - with gr.Row(): - dropdown = gr.Dropdown(dropdown_fn_list, value=r"打开插件列表", label="", show_label=False).style(container=False) - with gr.Row(): - plugin_advanced_arg = gr.Textbox(show_label=True, label="高级参数输入区", visible=False, - placeholder="这里是特殊函数插件的高级参数输入区").style(container=False) - with gr.Row(): - switchy_bt = gr.Button(r"请先从插件列表中选择", variant="secondary").style(size="sm") - with gr.Row(): - with gr.Accordion("点击展开“文件上传区”。上传本地文件/压缩包供函数插件调用。", open=False) as area_file_up: - file_upload = gr.Files(label="任何文件, 但推荐上传压缩文件(zip, tar)", file_count="multiple") - with gr.Accordion("更换模型 & SysPrompt & 交互界面布局", open=(LAYOUT == "TOP-DOWN"), elem_id="interact-panel"): - system_prompt = gr.Textbox(show_label=True, placeholder=f"System Prompt", label="System prompt", value=initial_prompt) - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",) - max_length_sl = gr.Slider(minimum=256, maximum=8192, value=4096, step=1, interactive=True, label="Local LLM MaxLength",) - checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区", "底部输入区", "输入清除键", "插件参数区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区") - md_dropdown = gr.Dropdown(AVAIL_LLM_MODELS, value=LLM_MODEL, label="更换LLM模型/请求源").style(container=False) - gr.Markdown(description) - with gr.Accordion("备选输入区", open=True, visible=False, elem_id="input-panel2") as area_input_secondary: - with gr.Row(): - txt2 = gr.Textbox(show_label=False, placeholder="Input question here.", label="输入区2").style(container=False) - with gr.Row(): - submitBtn2 = gr.Button("提交", variant="primary") - with gr.Row(): - resetBtn2 = gr.Button("重置", variant="secondary"); resetBtn2.style(size="sm") - stopBtn2 = gr.Button("停止", variant="secondary"); stopBtn2.style(size="sm") - clearBtn2 = gr.Button("清除", variant="secondary", visible=False); clearBtn2.style(size="sm") - - # 功能区显示开关与功能区的互动 - def fn_area_visibility(a): - ret = {} - ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))}) - ret.update({area_crazy_fn: gr.update(visible=("函数插件区" in a))}) - ret.update({area_input_primary: gr.update(visible=("底部输入区" not in a))}) - ret.update({area_input_secondary: gr.update(visible=("底部输入区" in a))}) - ret.update({clearBtn: gr.update(visible=("输入清除键" in a))}) - ret.update({clearBtn2: gr.update(visible=("输入清除键" in a))}) - ret.update({plugin_advanced_arg: gr.update(visible=("插件参数区" in a))}) - if "底部输入区" in a: ret.update({txt: gr.update(value="")}) - return ret - 
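-        # wire the checkbox group to fn_area_visibility so toggling an entry shows or hides the matching panel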
checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn, area_crazy_fn, area_input_primary, area_input_secondary, txt, txt2, clearBtn, clearBtn2, plugin_advanced_arg] ) - # 整理反复出现的控件句柄组合 - input_combo = [cookies, max_length_sl, md_dropdown, txt, txt2, top_p, temperature, chatbot, history, system_prompt, plugin_advanced_arg] - output_combo = [cookies, chatbot, history, status] - predict_args = dict(fn=ArgsGeneralWrapper(predict), inputs=input_combo, outputs=output_combo) - # 提交按钮、重置按钮 - cancel_handles.append(txt.submit(**predict_args)) - cancel_handles.append(txt2.submit(**predict_args)) - cancel_handles.append(submitBtn.click(**predict_args)) - cancel_handles.append(submitBtn2.click(**predict_args)) - resetBtn.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) - resetBtn2.click(lambda: ([], [], "已重置"), None, [chatbot, history, status]) - clearBtn.click(lambda: ("",""), None, [txt, txt2]) - clearBtn2.click(lambda: ("",""), None, [txt, txt2]) - if AUTO_CLEAR_TXT: - submitBtn.click(lambda: ("",""), None, [txt, txt2]) - submitBtn2.click(lambda: ("",""), None, [txt, txt2]) - txt.submit(lambda: ("",""), None, [txt, txt2]) - txt2.submit(lambda: ("",""), None, [txt, txt2]) - # 基础功能区的回调函数注册 - for k in functional: - if ("Visible" in functional[k]) and (not functional[k]["Visible"]): continue - click_handle = functional[k]["Button"].click(fn=ArgsGeneralWrapper(predict), inputs=[*input_combo, gr.State(True), gr.State(k)], outputs=output_combo) - cancel_handles.append(click_handle) - # 文件上传区,接收文件后与chatbot的互动 - file_upload.upload(on_file_uploaded, [file_upload, chatbot, txt, txt2, checkboxes, cookies], [chatbot, txt, txt2, cookies]) - # 函数插件-固定按钮区 - for k in plugins: - if not plugins[k].get("AsButton", True): continue - click_handle = plugins[k]["Button"].click(ArgsGeneralWrapper(plugins[k]["Function"]), [*input_combo, gr.State(PORT)], output_combo) - click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]) - cancel_handles.append(click_handle) - # 函数插件-下拉菜单与随变按钮的互动 - def on_dropdown_changed(k): - variant = plugins[k]["Color"] if "Color" in plugins[k] else "secondary" - ret = {switchy_bt: gr.update(value=k, variant=variant)} - if plugins[k].get("AdvancedArgs", False): # 是否唤起高级插件参数区 - ret.update({plugin_advanced_arg: gr.update(visible=True, label=f"插件[{k}]的高级参数说明:" + plugins[k].get("ArgsReminder", [f"没有提供高级参数功能说明"]))}) - else: - ret.update({plugin_advanced_arg: gr.update(visible=False, label=f"插件[{k}]不需要高级参数。")}) - return ret - dropdown.select(on_dropdown_changed, [dropdown], [switchy_bt, plugin_advanced_arg] ) - def on_md_dropdown_changed(k): - return {chatbot: gr.update(label="当前模型:"+k)} - md_dropdown.select(on_md_dropdown_changed, [md_dropdown], [chatbot] ) - # 随变按钮的回调函数注册 - def route(request: gr.Request, k, *args, **kwargs): - if k in [r"打开插件列表", r"请先从插件列表中选择"]: return - yield from ArgsGeneralWrapper(plugins[k]["Function"])(request, *args, **kwargs) - click_handle = switchy_bt.click(route,[switchy_bt, *input_combo, gr.State(PORT)], output_combo) - click_handle.then(on_report_generated, [cookies, file_upload, chatbot], [cookies, file_upload, chatbot]) - cancel_handles.append(click_handle) - # 终止按钮的回调函数注册 - stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles) - stopBtn2.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles) - plugins_as_btn = {name:plugin for name, plugin in plugins.items() if plugin.get('Button', None)} - def on_group_change(group_list): - btn_list = [] - fns_list = [] - if not 
group_list:  # special case: no plugin group is selected
-                return [*[plugin['Button'].update(visible=False) for _, plugin in plugins_as_btn.items()], gr.Dropdown.update(choices=[])]
-            for k, plugin in plugins.items():
-                if plugin.get("AsButton", True):
-                    btn_list.append(plugin['Button'].update(visible=match_group(plugin['Group'], group_list))) # refresh the buttons
-                    if plugin.get('AdvancedArgs', False): dropdown_fn_list.append(k) # plugins that need advanced args also appear in the dropdown
-                elif match_group(plugin['Group'], group_list): fns_list.append(k) # refresh the dropdown list
-            return [*btn_list, gr.Dropdown.update(choices=fns_list)]
-        plugin_group_sel.select(fn=on_group_change, inputs=[plugin_group_sel], outputs=[*[plugin['Button'] for name, plugin in plugins_as_btn.items()], dropdown])
-        if ENABLE_AUDIO:
-            from crazy_functions.live_audio.audio_io import RealtimeAudioDistribution
-            rad = RealtimeAudioDistribution()
-            def deal_audio(audio, cookies):
-                rad.feed(cookies['uuid'].hex, audio)
-            audio_mic.stream(deal_audio, inputs=[audio_mic, cookies])
-
-        def init_cookie(cookies, chatbot):
-            # assign every visiting user a unique uuid
-            cookies.update({'uuid': uuid.uuid4()})
-            return cookies
-        demo.load(init_cookie, inputs=[cookies, chatbot], outputs=[cookies])
-        demo.load(lambda: 0, inputs=None, outputs=None, _js='()=>{ChatBotHeight();}')
-
-    # gradio's inbrowser trigger is not very reliable, so fall back to the original browser-opening routine
-    def auto_opentab_delay():
-        import threading, webbrowser, time
-        print(f"如果浏览器没有自动打开,请复制并转到以下URL:")
-        print(f"\t(亮色主题): http://localhost:{PORT}")
-        print(f"\t(暗色主题): http://localhost:{PORT}/?__theme=dark")
-        def open():
-            time.sleep(2)  # open the browser after the server is up
-            DARK_MODE, = get_conf('DARK_MODE')
-            if DARK_MODE: webbrowser.open_new_tab(f"http://localhost:{PORT}/?__theme=dark")
-            else: webbrowser.open_new_tab(f"http://localhost:{PORT}")
-        threading.Thread(target=open, name="open-browser", daemon=True).start()
-        threading.Thread(target=auto_update, name="self-upgrade", daemon=True).start()
-        threading.Thread(target=warm_up_modules, name="warm-up", daemon=True).start()
-
-    auto_opentab_delay()
-    demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", share=False, favicon_path="docs/logo.png", blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
-
-    # to serve under a sub-path instead, use the following
-    # CUSTOM_PATH, = get_conf('CUSTOM_PATH')
-    # if CUSTOM_PATH != "/":
-    #     from toolbox import run_gradio_in_subpath
-    #     run_gradio_in_subpath(demo, auth=AUTHENTICATION, port=PORT, custom_path=CUSTOM_PATH)
-    # else:
-    #     demo.launch(server_name="0.0.0.0", server_port=PORT, auth=AUTHENTICATION, favicon_path="docs/logo.png",
-    #                 blocked_paths=["config.py","config_private.py","docker-compose.yml","Dockerfile"])
-
-if __name__ == "__main__":
-    main()
diff --git a/spaces/Mahiruoshi/MyGO_VIts-bert/resample.py b/spaces/Mahiruoshi/MyGO_VIts-bert/resample.py
deleted file mode 100644
index 87abdfe19bda902ae9e99ab2a9f1ea8998425557..0000000000000000000000000000000000000000
--- a/spaces/Mahiruoshi/MyGO_VIts-bert/resample.py
+++ /dev/null
@@ -1,48 +0,0 @@
-import os
-import argparse
-import librosa
-from multiprocessing import Pool, cpu_count
-
-import soundfile
-from tqdm import tqdm
-
-
-def process(item):
-    spkdir, wav_name, args = item
-    speaker = spkdir.replace("\\", "/").split("/")[-1]
-    wav_path = os.path.join(args.in_dir, speaker, wav_name)
-    if os.path.exists(wav_path) and ".wav" in wav_path:
-        os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True)
-        wav, sr = librosa.load(wav_path, sr=args.sr)
-        soundfile.write(os.path.join(args.out_dir, speaker, wav_name), wav, sr)
-
-
-if __name__ == "__main__":
-    parser = 
argparse.ArgumentParser() - parser.add_argument("--sr", type=int, default=44100, help="sampling rate") - parser.add_argument( - "--in_dir", type=str, default="./raw", help="path to source dir" - ) - parser.add_argument( - "--out_dir", type=str, default="./dataset", help="path to target dir" - ) - args = parser.parse_args() - # processes = 8 - processes = cpu_count() - 2 if cpu_count() > 4 else 1 - pool = Pool(processes=processes) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm( - pool.imap_unordered( - process, - [ - (spk_dir, i, args) - for i in os.listdir(spk_dir) - if i.endswith("wav") - ], - ) - ): - pass diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/is_hrnet_model.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/is_hrnet_model.py deleted file mode 100644 index ced540a782c7b6e5b498d2e345faa95cb4015f4c..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/XMem/inference/interact/fbrs/model/is_hrnet_model.py +++ /dev/null @@ -1,87 +0,0 @@ -import torch -import torch.nn as nn - -from .ops import DistMaps -from .modeling.hrnet_ocr import HighResolutionNet - - -def get_hrnet_model(width=48, ocr_width=256, small=False, norm_radius=260, - use_rgb_conv=True, with_aux_output=False, cpu_dist_maps=False, - norm_layer=nn.BatchNorm2d): - model = DistMapsHRNetModel( - feature_extractor=HighResolutionNet(width=width, ocr_width=ocr_width, small=small, - num_classes=1, norm_layer=norm_layer), - use_rgb_conv=use_rgb_conv, - with_aux_output=with_aux_output, - norm_layer=norm_layer, - norm_radius=norm_radius, - cpu_dist_maps=cpu_dist_maps - ) - - return model - - -class DistMapsHRNetModel(nn.Module): - def __init__(self, feature_extractor, use_rgb_conv=True, with_aux_output=False, - norm_layer=nn.BatchNorm2d, norm_radius=260, cpu_dist_maps=False): - super(DistMapsHRNetModel, self).__init__() - self.with_aux_output = with_aux_output - - if use_rgb_conv: - self.rgb_conv = nn.Sequential( - nn.Conv2d(in_channels=5, out_channels=8, kernel_size=1), - nn.LeakyReLU(negative_slope=0.2), - norm_layer(8), - nn.Conv2d(in_channels=8, out_channels=3, kernel_size=1), - ) - else: - self.rgb_conv = None - - self.dist_maps = DistMaps(norm_radius=norm_radius, spatial_scale=1.0, cpu_mode=cpu_dist_maps) - self.feature_extractor = feature_extractor - - def forward(self, image, points): - coord_features = self.dist_maps(image, points) - - if self.rgb_conv is not None: - x = self.rgb_conv(torch.cat((image, coord_features), dim=1)) - else: - c1, c2 = torch.chunk(coord_features, 2, dim=1) - c3 = torch.ones_like(c1) - coord_features = torch.cat((c1, c2, c3), dim=1) - x = 0.8 * image * coord_features + 0.2 * image - - feature_extractor_out = self.feature_extractor(x) - instance_out = feature_extractor_out[0] - instance_out = nn.functional.interpolate(instance_out, size=image.size()[2:], - mode='bilinear', align_corners=True) - outputs = {'instances': instance_out} - if self.with_aux_output: - instance_aux_out = feature_extractor_out[1] - instance_aux_out = nn.functional.interpolate(instance_aux_out, size=image.size()[2:], - mode='bilinear', align_corners=True) - outputs['instances_aux'] = instance_aux_out - - return outputs - - def load_weights(self, path_to_weights): - current_state_dict = 
self.state_dict() - new_state_dict = torch.load(path_to_weights) - current_state_dict.update(new_state_dict) - self.load_state_dict(current_state_dict) - - def get_trainable_params(self): - backbone_params = nn.ParameterList() - other_params = nn.ParameterList() - other_params_keys = [] - nonbackbone_keywords = ['rgb_conv', 'aux_head', 'cls_head', 'conv3x3_ocr', 'ocr_distri_head'] - - for name, param in self.named_parameters(): - if param.requires_grad: - if any(x in name for x in nonbackbone_keywords): - other_params.append(param) - other_params_keys.append(name) - else: - backbone_params.append(param) - print('Nonbackbone params:', sorted(other_params_keys)) - return backbone_params, other_params diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/hrf.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/hrf.py deleted file mode 100644 index 242d790eb1b83e75cf6b7eaa7a35c674099311ad..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/configs/_base_/datasets/hrf.py +++ /dev/null @@ -1,59 +0,0 @@ -# dataset settings -dataset_type = 'HRFDataset' -data_root = 'data/HRF' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -img_scale = (2336, 3504) -crop_size = (256, 256) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] - -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type='RepeatDataset', - times=40000, - dataset=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/training', - ann_dir='annotations/training', - pipeline=train_pipeline)), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='images/validation', - ann_dir='annotations/validation', - pipeline=test_pipeline)) diff --git a/spaces/MirageML/shap-e/app.py b/spaces/MirageML/shap-e/app.py deleted file mode 100644 index 50db28f9a0d0d40a5fd6686f69e169b3b6bd8e66..0000000000000000000000000000000000000000 --- a/spaces/MirageML/shap-e/app.py +++ /dev/null @@ -1,236 +0,0 @@ -import os -import gradio as gr -from PIL import Image -import torch -import matplotlib.pyplot as plt -import imageio -import numpy as np -import math -import argparse -import tempfile - -import torch -import base64 -import io -import os -from typing import Union - - -from shap_e.diffusion.sample import sample_latents -from shap_e.diffusion.gaussian_diffusion import diffusion_from_config -from shap_e.models.download import load_model, load_config -from shap_e.util.notebooks import 
create_pan_cameras, decode_latent_images, decode_latent_mesh - -from shap_e.models.nn.camera import DifferentiableCameraBatch, DifferentiableProjectiveCamera -from shap_e.models.transmitter.base import Transmitter, VectorDecoder -from shap_e.util.collections import AttrDict - -import trimesh - - -state = "" -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -css = ''' - .instruction{position: absolute; top: 0;right: 0;margin-top: 0px !important} - .arrow{position: absolute;top: 0;right: -110px;margin-top: -8px !important} - #component-4, #component-3, #component-10{min-height: 0} -''' - -def set_state(s): - print(s) - global state - state = s - -def get_state(): - return state - -def to_video(frames: list[Image.Image], fps: int = 5) -> str: - out_file = tempfile.NamedTemporaryFile(suffix='.mp4', delete=False) - writer = imageio.get_writer(out_file.name, format='FFMPEG', fps=fps) - for frame in frames: - writer.append_data(np.asarray(frame)) - writer.close() - return out_file.name - -def generate_3D(input, grid_size=64): - set_state('Entered generate function...') - - # if input is a string, it's a text prompt - xm = load_model('transmitter', device=device) - diffusion = diffusion_from_config(load_config('diffusion')) - batch_size = 4 - - if isinstance(input, np.ndarray): - input = Image.fromarray(input) - - if isinstance(input, Image.Image): - input = prepare_img(input) - model = load_model('image300M', device=device) - guidance_scale = 3.0 - model_kwargs = dict(images=[input] * batch_size) - else: - model = load_model('text300M', device=device) - guidance_scale = 15.0 - model_kwargs = dict(texts=[input] * batch_size) - - print(input) - - latents = sample_latents( - batch_size=batch_size, - model=model, - diffusion=diffusion, - guidance_scale=guidance_scale, - model_kwargs=model_kwargs, - progress=True, - clip_denoised=True, - use_fp16=True, - use_karras=True, - karras_steps=64, - sigma_min=1e-3, - sigma_max=160, - s_churn=0, - ) - - render_mode = 'stf' # you can change this to 'nerf' - size = grid_size # this is the size of the renders; higher values take longer to render. - - cameras = create_pan_cameras(size, device) - - with open(f'/tmp/mesh.ply', 'wb') as f: - decode_latent_mesh(xm, latents[0]).tri_mesh().write_ply(f) - - - set_state('Converting to point cloud...') - # pc = sampler.output_to_point_clouds(samples)[0] - - set_state('Converting to mesh...') - # save_ply(pc, 'output/mesh.ply', grid_size) - - set_state('') - - images = decode_latent_images(xm, latents[0], cameras, rendering_mode=render_mode) - - - return ply_to_glb('/tmp/mesh.ply', '/tmp/mesh.glb'), to_video(images), gr.update(value=['/tmp/mesh.glb', '/tmp/mesh.ply'], visible=True) - -def prepare_img(img): - - w, h = img.size - if w > h: - img = img.crop(((w - h) / 2, 0, w - (w - h) / 2, h)) - else: - img = img.crop((0, (h - w) / 2, w, h - (h - w) / 2)) - - # resize to 256x256 - img = img.resize((256, 256)) - - return img - - -def ply_to_glb(ply_file, glb_file): - mesh = trimesh.load(ply_file) - - # Save the mesh as a glb file using Trimesh - mesh.export(glb_file, file_type='glb') - - return glb_file - - -block = gr.Blocks().queue(max_size=250, concurrency_count=6) -with block: - with gr.Box(): - if(not torch.cuda.is_available()): - top_description = gr.HTML(f'''
-                 Shap-E Web UI
-                 If the Queue is Too Long, Try it on Mirage!
-                 Generate 3D Assets in 1 minute with a prompt or image! Based on the Shap-E implementation
-                 There's only one step left before you can train your model: attribute a T4 GPU to it (via the Settings tab) and run the training below. Other GPUs are not compatible for now. You will be billed by the minute from when you activate the GPU until it is turned off.
- ''') - else: - top_description = gr.HTML(f'''
-                 Shap-E Web UI
-                 If the Queue is Too Long, Try it on Mirage!
-                 Generate 3D Assets in 1 minute with a prompt or image! Based on the Shap-E implementation
- ''') - with gr.Row(): - with gr.Column(): - with gr.Tab("Text to 3D"): - gr.Markdown("Uses Stable Diffusion to create an image from the prompt.") - prompt = gr.Textbox(label="Prompt", placeholder="A HD photo of a Corgi") - text_button = gr.Button(label="Generate") - - with gr.Tab("Image to 3D"): - gr.Markdown("Best results with images of objects on an empty background.") - input_image = gr.Image(label="Image") - img_button = gr.Button(label="Generate") - - # with gr.Accordion("Advanced options", open=False): - # model = gr.Radio(["base40M", "base300M", "base1B"], label="Model", value="base1B") - # scale = gr.Slider( - # label="Guidance Scale", minimum=1.0, maximum=10.0, value=3.0, step=0.1 - # ) - - with gr.Column(): - model_gif = gr.Model3D(label="3D Model GIF") - # btn_pc_to_obj = gr.Button(value="Convert to OBJ", visible=False) - model_3d = gr.Model3D(value=None) - file_out = gr.File(label="Files", visible=False) - - if torch.cuda.is_available(): - gr.Examples( - examples=[ - ["a shark"], - ["an avocado"], - ], - inputs=[prompt], - outputs=[model_3d, model_gif, file_out], - fn=generate_3D, - cache_examples=True - ) - gr.Examples( - examples=[ - ["images/pumpkin.png"], - ["images/fantasy_world.png"], - ], - inputs=[input_image], - outputs=[model_3d, model_gif, file_out], - fn=generate_3D, - cache_examples=True - ) - - img_button.click(fn=generate_3D, inputs=[input_image], outputs=[model_3d, model_gif, file_out]) - text_button.click(fn=generate_3D, inputs=[prompt], outputs=[model_3d, model_gif, file_out]) - -block.launch(show_api=False) diff --git a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/psenet/psenet_resnet50_fpnf_600e_icdar2015.py b/spaces/Mountchicken/MAERec-Gradio/configs/textdet/psenet/psenet_resnet50_fpnf_600e_icdar2015.py deleted file mode 100644 index d5610c0dd91a0651cd44b1c1839cb810b57a0c5a..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/configs/textdet/psenet/psenet_resnet50_fpnf_600e_icdar2015.py +++ /dev/null @@ -1,44 +0,0 @@ -_base_ = [ - '_base_psenet_resnet50_fpnf.py', - '../_base_/datasets/icdar2015.py', - '../_base_/default_runtime.py', - '../_base_/schedules/schedule_adam_600e.py', -] - -# optimizer -optim_wrapper = dict(optimizer=dict(lr=1e-4)) -train_cfg = dict(val_interval=40) -param_scheduler = [ - dict(type='MultiStepLR', milestones=[200, 400], end=600), -] - -# dataset settings -icdar2015_textdet_train = _base_.icdar2015_textdet_train -icdar2015_textdet_test = _base_.icdar2015_textdet_test - -# use quadrilaterals for icdar2015 -model = dict( - backbone=dict(style='pytorch'), - det_head=dict(postprocessor=dict(text_repr_type='quad'))) - -# pipeline settings -icdar2015_textdet_train.pipeline = _base_.train_pipeline -icdar2015_textdet_test.pipeline = _base_.test_pipeline - -train_dataloader = dict( - batch_size=16, - num_workers=8, - persistent_workers=False, - sampler=dict(type='DefaultSampler', shuffle=True), - dataset=icdar2015_textdet_train) - -val_dataloader = dict( - batch_size=1, - num_workers=1, - persistent_workers=False, - sampler=dict(type='DefaultSampler', shuffle=False), - dataset=icdar2015_textdet_test) - -test_dataloader = val_dataloader - -auto_scale_lr = dict(base_batch_size=64 * 4) diff --git a/spaces/MrSinan/LFW-MaskedRecogntion/create_mask.py b/spaces/MrSinan/LFW-MaskedRecogntion/create_mask.py deleted file mode 100644 index 8a3db917a04b9d81c73ac0319d555ac8225ba925..0000000000000000000000000000000000000000 --- a/spaces/MrSinan/LFW-MaskedRecogntion/create_mask.py +++ /dev/null @@ -1,118 +0,0 @@ -# 
Author: aqeelanwar -# Created: 6 July,2020, 12:14 AM -# Email: aqeel.anwar@gatech.edu - -from PIL import ImageColor -import cv2 -import numpy as np - -COLOR = [ - "#fc1c1a", - "#177ABC", - "#94B6D2", - "#A5AB81", - "#DD8047", - "#6b425e", - "#e26d5a", - "#c92c48", - "#6a506d", - "#ffc900", - "#ffffff", - "#000000", - "#49ff00", -] - - -def color_the_mask(mask_image, color, intensity): - assert 0 <= intensity <= 1, "intensity should be between 0 and 1" - RGB_color = ImageColor.getcolor(color, "RGB") - RGB_color = (RGB_color[2], RGB_color[1], RGB_color[0]) - orig_shape = mask_image.shape - bit_mask = mask_image[:, :, 3] - mask_image = mask_image[:, :, 0:3] - - color_image = np.full(mask_image.shape, RGB_color, np.uint8) - mask_color = cv2.addWeighted(mask_image, 1 - intensity, color_image, intensity, 0) - mask_color = cv2.bitwise_and(mask_color, mask_color, mask=bit_mask) - colored_mask = np.zeros(orig_shape, dtype=np.uint8) - colored_mask[:, :, 0:3] = mask_color - colored_mask[:, :, 3] = bit_mask - return colored_mask - - -def texture_the_mask(mask_image, texture_path, intensity): - assert 0 <= intensity <= 1, "intensity should be between 0 and 1" - orig_shape = mask_image.shape - bit_mask = mask_image[:, :, 3] - mask_image = mask_image[:, :, 0:3] - texture_image = cv2.imread(texture_path) - texture_image = cv2.resize(texture_image, (orig_shape[1], orig_shape[0])) - - mask_texture = cv2.addWeighted( - mask_image, 1 - intensity, texture_image, intensity, 0 - ) - mask_texture = cv2.bitwise_and(mask_texture, mask_texture, mask=bit_mask) - textured_mask = np.zeros(orig_shape, dtype=np.uint8) - textured_mask[:, :, 0:3] = mask_texture - textured_mask[:, :, 3] = bit_mask - - return textured_mask - - - -# cloth_mask = cv2.imread("masks/templates/cloth.png", cv2.IMREAD_UNCHANGED) -# # cloth_mask = color_the_mask(cloth_mask, color=COLOR[0], intensity=0.5) -# path = "masks/textures" -# path, dir, files = os.walk(path).__next__() -# first_frame = True -# col_limit = 6 -# i = 0 -# # img_concat_row=[] -# img_concat = [] -# # for f in files: -# # if "._" not in f: -# # print(f) -# # i += 1 -# # texture_image = cv2.imread(os.path.join(path, f)) -# # m = texture_the_mask(cloth_mask, texture_image, intensity=0.5) -# # if first_frame: -# # img_concat_row = m -# # first_frame = False -# # else: -# # img_concat_row = cv2.hconcat((img_concat_row, m)) -# # -# # if i % col_limit == 0: -# # if len(img_concat) > 0: -# # img_concat = cv2.vconcat((img_concat, img_concat_row)) -# # else: -# # img_concat = img_concat_row -# # first_frame = True -# -# ## COlor the mask -# thresholds = np.arange(0.1,0.9,0.05) -# for intensity in thresholds: -# c=COLOR[2] -# # intensity = 0.5 -# if "._" not in c: -# print(intensity) -# i += 1 -# # texture_image = cv2.imread(os.path.join(path, f)) -# m = color_the_mask(cloth_mask, c, intensity=intensity) -# if first_frame: -# img_concat_row = m -# first_frame = False -# else: -# img_concat_row = cv2.hconcat((img_concat_row, m)) -# -# if i % col_limit == 0: -# if len(img_concat) > 0: -# img_concat = cv2.vconcat((img_concat, img_concat_row)) -# else: -# img_concat = img_concat_row -# first_frame = True -# -# -# cv2.imshow("k", img_concat) -# cv2.imwrite("combine_N95_left.png", img_concat) -# cv2.waitKey(0) -# cc = 1 diff --git a/spaces/NCTCMumbai/NCTC/models/research/compression/entropy_coder/__init__.py b/spaces/NCTCMumbai/NCTC/models/research/compression/entropy_coder/__init__.py deleted file mode 100644 index 
e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OAOA/DifFace/basicsr/archs/stylegan2_bilinear_arch.py b/spaces/OAOA/DifFace/basicsr/archs/stylegan2_bilinear_arch.py deleted file mode 100644 index 2395170411f9d11f2798ac03cf6ec6eb32fe5e43..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/archs/stylegan2_bilinear_arch.py +++ /dev/null @@ -1,614 +0,0 @@ -import math -import random -import torch -from torch import nn -from torch.nn import functional as F - -from basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu -from basicsr.utils.registry import ARCH_REGISTRY - - -class NormStyleCode(nn.Module): - - def forward(self, x): - """Normalize the style codes. - - Args: - x (Tensor): Style codes with shape (b, c). - - Returns: - Tensor: Normalized tensor. - """ - return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8) - - -class EqualLinear(nn.Module): - """Equalized Linear as StyleGAN2. - - Args: - in_channels (int): Size of each sample. - out_channels (int): Size of each output sample. - bias (bool): If set to ``False``, the layer will not learn an additive - bias. Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0. - lr_mul (float): Learning rate multiplier. Default: 1. - activation (None | str): The activation after ``linear`` operation. - Supported: 'fused_lrelu', None. Default: None. - """ - - def __init__(self, in_channels, out_channels, bias=True, bias_init_val=0, lr_mul=1, activation=None): - super(EqualLinear, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.lr_mul = lr_mul - self.activation = activation - if self.activation not in ['fused_lrelu', None]: - raise ValueError(f'Wrong activation value in EqualLinear: {activation}' - "Supported ones are: ['fused_lrelu', None].") - self.scale = (1 / math.sqrt(in_channels)) * lr_mul - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels).div_(lr_mul)) - if bias: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - def forward(self, x): - if self.bias is None: - bias = None - else: - bias = self.bias * self.lr_mul - if self.activation == 'fused_lrelu': - out = F.linear(x, self.weight * self.scale) - out = fused_leaky_relu(out, bias) - else: - out = F.linear(x, self.weight * self.scale, bias=bias) - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, bias={self.bias is not None})') - - -class ModulatedConv2d(nn.Module): - """Modulated Conv2d used in StyleGAN2. - - There is no bias in ModulatedConv2d. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - num_style_feat (int): Channel number of style features. - demodulate (bool): Whether to demodulate in the conv layer. - Default: True. - sample_mode (str | None): Indicating 'upsample', 'downsample' or None. - Default: None. - eps (float): A value added to the denominator for numerical stability. - Default: 1e-8. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=True, - sample_mode=None, - eps=1e-8, - interpolation_mode='bilinear'): - super(ModulatedConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.demodulate = demodulate - self.sample_mode = sample_mode - self.eps = eps - self.interpolation_mode = interpolation_mode - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - - self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - # modulation inside each modulated conv - self.modulation = EqualLinear( - num_style_feat, in_channels, bias=True, bias_init_val=1, lr_mul=1, activation=None) - - self.weight = nn.Parameter(torch.randn(1, out_channels, in_channels, kernel_size, kernel_size)) - self.padding = kernel_size // 2 - - def forward(self, x, style): - """Forward function. - - Args: - x (Tensor): Tensor with shape (b, c, h, w). - style (Tensor): Tensor with shape (b, num_style_feat). - - Returns: - Tensor: Modulated tensor after convolution. - """ - b, c, h, w = x.shape # c = c_in - # weight modulation - style = self.modulation(style).view(b, 1, c, 1, 1) - # self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1) - weight = self.scale * self.weight * style # (b, c_out, c_in, k, k) - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps) - weight = weight * demod.view(b, self.out_channels, 1, 1, 1) - - weight = weight.view(b * self.out_channels, c, self.kernel_size, self.kernel_size) - - if self.sample_mode == 'upsample': - x = F.interpolate(x, scale_factor=2, mode=self.interpolation_mode, align_corners=self.align_corners) - elif self.sample_mode == 'downsample': - x = F.interpolate(x, scale_factor=0.5, mode=self.interpolation_mode, align_corners=self.align_corners) - - b, c, h, w = x.shape - x = x.view(1, b * c, h, w) - # weight: (b*c_out, c_in, k, k), groups=b - out = F.conv2d(x, weight, padding=self.padding, groups=b) - out = out.view(b, self.out_channels, *out.shape[2:4]) - - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, ' - f'kernel_size={self.kernel_size}, ' - f'demodulate={self.demodulate}, sample_mode={self.sample_mode})') - - -class StyleConv(nn.Module): - """Style conv. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - num_style_feat (int): Channel number of style features. - demodulate (bool): Whether demodulate in the conv layer. Default: True. - sample_mode (str | None): Indicating 'upsample', 'downsample' or None. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode='bilinear'): - super(StyleConv, self).__init__() - self.modulated_conv = ModulatedConv2d( - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=demodulate, - sample_mode=sample_mode, - interpolation_mode=interpolation_mode) - self.weight = nn.Parameter(torch.zeros(1)) # for noise injection - self.activate = FusedLeakyReLU(out_channels) - - def forward(self, x, style, noise=None): - # modulate - out = self.modulated_conv(x, style) - # noise injection - if noise is None: - b, _, h, w = out.shape - noise = out.new_empty(b, 1, h, w).normal_() - out = out + self.weight * noise - # activation (with bias) - out = self.activate(out) - return out - - -class ToRGB(nn.Module): - """To RGB from features. - - Args: - in_channels (int): Channel number of input. - num_style_feat (int): Channel number of style features. - upsample (bool): Whether to upsample. Default: True. - """ - - def __init__(self, in_channels, num_style_feat, upsample=True, interpolation_mode='bilinear'): - super(ToRGB, self).__init__() - self.upsample = upsample - self.interpolation_mode = interpolation_mode - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - self.modulated_conv = ModulatedConv2d( - in_channels, - 3, - kernel_size=1, - num_style_feat=num_style_feat, - demodulate=False, - sample_mode=None, - interpolation_mode=interpolation_mode) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, x, style, skip=None): - """Forward function. - - Args: - x (Tensor): Feature tensor with shape (b, c, h, w). - style (Tensor): Tensor with shape (b, num_style_feat). - skip (Tensor): Base/skip tensor. Default: None. - - Returns: - Tensor: RGB images. - """ - out = self.modulated_conv(x, style) - out = out + self.bias - if skip is not None: - if self.upsample: - skip = F.interpolate( - skip, scale_factor=2, mode=self.interpolation_mode, align_corners=self.align_corners) - out = out + skip - return out - - -class ConstantInput(nn.Module): - """Constant input. - - Args: - num_channel (int): Channel number of constant input. - size (int): Spatial size of constant input. - """ - - def __init__(self, num_channel, size): - super(ConstantInput, self).__init__() - self.weight = nn.Parameter(torch.randn(1, num_channel, size, size)) - - def forward(self, batch): - out = self.weight.repeat(batch, 1, 1, 1) - return out - - -@ARCH_REGISTRY.register(suffix='basicsr') -class StyleGAN2GeneratorBilinear(nn.Module): - """StyleGAN2 Generator. - - Args: - out_size (int): The spatial size of outputs. - num_style_feat (int): Channel number of style features. Default: 512. - num_mlp (int): Layer number of MLP style layers. Default: 8. - channel_multiplier (int): Channel multiplier for large networks of - StyleGAN2. Default: 2. - lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01. - narrow (float): Narrow ratio for channels. Default: 1.0. 
- """ - - def __init__(self, - out_size, - num_style_feat=512, - num_mlp=8, - channel_multiplier=2, - lr_mlp=0.01, - narrow=1, - interpolation_mode='bilinear'): - super(StyleGAN2GeneratorBilinear, self).__init__() - # Style MLP layers - self.num_style_feat = num_style_feat - style_mlp_layers = [NormStyleCode()] - for i in range(num_mlp): - style_mlp_layers.append( - EqualLinear( - num_style_feat, num_style_feat, bias=True, bias_init_val=0, lr_mul=lr_mlp, - activation='fused_lrelu')) - self.style_mlp = nn.Sequential(*style_mlp_layers) - - channels = { - '4': int(512 * narrow), - '8': int(512 * narrow), - '16': int(512 * narrow), - '32': int(512 * narrow), - '64': int(256 * channel_multiplier * narrow), - '128': int(128 * channel_multiplier * narrow), - '256': int(64 * channel_multiplier * narrow), - '512': int(32 * channel_multiplier * narrow), - '1024': int(16 * channel_multiplier * narrow) - } - self.channels = channels - - self.constant_input = ConstantInput(channels['4'], size=4) - self.style_conv1 = StyleConv( - channels['4'], - channels['4'], - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode=interpolation_mode) - self.to_rgb1 = ToRGB(channels['4'], num_style_feat, upsample=False, interpolation_mode=interpolation_mode) - - self.log_size = int(math.log(out_size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - self.num_latent = self.log_size * 2 - 2 - - self.style_convs = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channels = channels['4'] - # noise - for layer_idx in range(self.num_layers): - resolution = 2**((layer_idx + 5) // 2) - shape = [1, 1, resolution, resolution] - self.noises.register_buffer(f'noise{layer_idx}', torch.randn(*shape)) - # style convs and to_rgbs - for i in range(3, self.log_size + 1): - out_channels = channels[f'{2**i}'] - self.style_convs.append( - StyleConv( - in_channels, - out_channels, - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode='upsample', - interpolation_mode=interpolation_mode)) - self.style_convs.append( - StyleConv( - out_channels, - out_channels, - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode=interpolation_mode)) - self.to_rgbs.append( - ToRGB(out_channels, num_style_feat, upsample=True, interpolation_mode=interpolation_mode)) - in_channels = out_channels - - def make_noise(self): - """Make noise for noise injection.""" - device = self.constant_input.weight.device - noises = [torch.randn(1, 1, 4, 4, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2**i, 2**i, device=device)) - - return noises - - def get_latent(self, x): - return self.style_mlp(x) - - def mean_latent(self, num_latent): - latent_in = torch.randn(num_latent, self.num_style_feat, device=self.constant_input.weight.device) - latent = self.style_mlp(latent_in).mean(0, keepdim=True) - return latent - - def forward(self, - styles, - input_is_latent=False, - noise=None, - randomize_noise=True, - truncation=1, - truncation_latent=None, - inject_index=None, - return_latents=False): - """Forward function for StyleGAN2Generator. - - Args: - styles (list[Tensor]): Sample codes of styles. - input_is_latent (bool): Whether input is latent style. - Default: False. - noise (Tensor | None): Input noise or None. Default: None. - randomize_noise (bool): Randomize noise, used when 'noise' is - False. Default: True. 
- truncation (float): The truncation ratio for the truncation trick. Default: 1. - truncation_latent (Tensor | None): The mean latent used for truncation. Default: None. - inject_index (int | None): The injection index for mixing noise. - Default: None. - return_latents (bool): Whether to return style latents. - Default: False. - """ - # style codes -> latents with Style MLP layer - if not input_is_latent: - styles = [self.style_mlp(s) for s in styles] - # noises - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers # for each style conv layer - else: # use the stored noise - noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)] - # style truncation - if truncation < 1: - style_truncation = [] - for style in styles: - style_truncation.append(truncation_latent + truncation * (style - truncation_latent)) - styles = style_truncation - # get style latent with injection - if len(styles) == 1: - inject_index = self.num_latent - - if styles[0].ndim < 3: - # repeat latent code for all the layers - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: # used for encoder with different latent code for each layer - latent = styles[0] - elif len(styles) == 2: # mixing noises - if inject_index is None: - inject_index = random.randint(1, self.num_latent - 1) - latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1) - latent = torch.cat([latent1, latent2], 1) - - # main generation - out = self.constant_input(latent.shape[0]) - out = self.style_conv1(out, latent[:, 0], noise=noise[0]) - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2], - noise[2::2], self.to_rgbs): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - i += 2 - - image = skip - - if return_latents: - return image, latent - else: - return image, None - - -class ScaledLeakyReLU(nn.Module): - """Scaled LeakyReLU. - - Args: - negative_slope (float): Negative slope. Default: 0.2. - """ - - def __init__(self, negative_slope=0.2): - super(ScaledLeakyReLU, self).__init__() - self.negative_slope = negative_slope - - def forward(self, x): - out = F.leaky_relu(x, negative_slope=self.negative_slope) - return out * math.sqrt(2) - - -class EqualConv2d(nn.Module): - """Equalized Conv2d as in StyleGAN2. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - stride (int): Stride of the convolution. Default: 1. - padding (int): Zero-padding added to both sides of the input. - Default: 0. - bias (bool): If ``True``, adds a learnable bias to the output. - Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0.
- """ - - def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, bias=True, bias_init_val=0): - super(EqualConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - def forward(self, x): - out = F.conv2d( - x, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, ' - f'kernel_size={self.kernel_size},' - f' stride={self.stride}, padding={self.padding}, ' - f'bias={self.bias is not None})') - - -class ConvLayer(nn.Sequential): - """Conv Layer used in StyleGAN2 Discriminator. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Kernel size. - downsample (bool): Whether downsample by a factor of 2. - Default: False. - bias (bool): Whether with bias. Default: True. - activate (bool): Whether use activateion. Default: True. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - downsample=False, - bias=True, - activate=True, - interpolation_mode='bilinear'): - layers = [] - self.interpolation_mode = interpolation_mode - # downsample - if downsample: - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - - layers.append( - torch.nn.Upsample(scale_factor=0.5, mode=interpolation_mode, align_corners=self.align_corners)) - stride = 1 - self.padding = kernel_size // 2 - # conv - layers.append( - EqualConv2d( - in_channels, out_channels, kernel_size, stride=stride, padding=self.padding, bias=bias - and not activate)) - # activation - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channels)) - else: - layers.append(ScaledLeakyReLU(0.2)) - - super(ConvLayer, self).__init__(*layers) - - -class ResBlock(nn.Module): - """Residual block used in StyleGAN2 Discriminator. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. 
- """ - - def __init__(self, in_channels, out_channels, interpolation_mode='bilinear'): - super(ResBlock, self).__init__() - - self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True) - self.conv2 = ConvLayer( - in_channels, - out_channels, - 3, - downsample=True, - interpolation_mode=interpolation_mode, - bias=True, - activate=True) - self.skip = ConvLayer( - in_channels, - out_channels, - 1, - downsample=True, - interpolation_mode=interpolation_mode, - bias=False, - activate=False) - - def forward(self, x): - out = self.conv1(x) - out = self.conv2(out) - skip = self.skip(x) - out = (out + skip) / math.sqrt(2) - return out diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/commonsense_qa/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/commonsense_qa/__init__.py deleted file mode 100644 index 42d21f35eb3dd33a053dcf0edd5eadd2dff11294..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/roberta/commonsense_qa/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import commonsense_qa_task # noqa diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/README.md deleted file mode 100644 index 1a3d131ec165f12e37906420fc2c284a7223bda2..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/textless_nlp/gslm/speech2unit/README.md +++ /dev/null @@ -1,71 +0,0 @@ -# Speech to Unit Model (speech2unit) - -## Acoustic Model -For quantizing speech we learn a K-means clustering over acoustic representations for which we either use Log-Mel Filterbank or pretrained acoustic representation models. For using pretrained models, please download from their respective locations linked below. -* [Modified CPC](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/cpc_big_ll6kh_top_ctc.pt) -* [HuBERT-Base](https://dl.fbaipublicfiles.com/hubert/hubert_base_ls960.pt) -* [Wav2Vec 2.0-Base](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_vox_new.pt) - -## Quantization Model -You can download pretrained quantized model from the list below. 
- -K-Means Model | Download Link -|-|- -Log Mel Filterbank + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km50/km.bin) -Log Mel Filterbank + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km100/km.bin) -Log Mel Filterbank + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km200/km.bin) -Log Mel Filterbank + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/logmel/km500/km.bin) -Modified CPC + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km50/km.bin) -Modified CPC + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km100/km.bin) -Modified CPC + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km200/km.bin) -Modified CPC + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/cpc/km500/km.bin) -HuBERT Base + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km50/km.bin) -HuBERT Base + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km100/km.bin) -HuBERT Base + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km200/km.bin) -HuBERT Base + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/hubert/km500/km.bin) -wav2vec 2.0 Large + KM50 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km50/km.bin) -wav2vec 2.0 Large + KM100 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km100/km.bin) -wav2vec 2.0 Large + KM200 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km200/km.bin) -wav2vec 2.0 Large + KM500 | [download](https://dl.fbaipublicfiles.com/textless_nlp/gslm/w2v2/km500/km.bin) - -### Quantization -For quantizing speech with a given acoustic representation, please follow the steps below. -1. Learn K-means clustering model -``` -N_CLUSTERS= -TYPE= -CKPT_PATH= -LAYER= -MANIFEST= -KM_MODEL_PATH= - -PYTHONPATH=. python examples/textless_nlp/gslm/speech2unit/clustering/cluster_kmeans.py \ - --num_clusters $N_CLUSTERS \ - --feature_type $TYPE \ - --checkpoint_path $CKPT_PATH \ - --layer $LAYER \ - --manifest_path $MANIFEST \ - --out_kmeans_model_path $KM_MODEL_PATH -``` -2. Quantize using the learned clusters -``` -MANIFEST= -OUT_QUANTIZED_FILE= - -python examples/textless_nlp/gslm/speech2unit/clustering/del/quantize_with_kmeans.py \ - --feature_type $TYPE \ - --kmeans_model_path $KM_MODEL_PATH \ - --checkpoint_path $CKPT_PATH \ - --layer $LAYER \ - --manifest_path $MANIFEST \ - --out_quantized_file_path $OUT_QUANTIZED_FILE \ - --extension ".flac" -``` - -Note about the manifest file is a file with paths and length of input audio files. The format of the file is as follows: -``` - -\t -\t -... -``` \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py deleted file mode 100644 index bfe2a0d381f28525f90ee120b31a69210338eb1b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/optim/lr_scheduler/triangular_lr_scheduler.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
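-# Worked example of the triangular schedule implemented below (a sketch; the -# config values are assumed for illustration, not taken from the paper): with -# cfg.lr=[0.1], cfg.max_lr=1.0, cfg.lr_period_updates=5000 and cfg.lr_shrink=0.1, -# stepsize = 5000 // 2 = 2500. At num_updates=1250 we are halfway up the first -# cycle: cycle = floor(1250 / 5000) = 0, x = |1250/2500 - 2*(0+1) + 1| = 0.5, so -# lr = 0.1 + (1.0 - 0.1) * max(0, 1 - 0.5) = 0.55. At num_updates=2500 the peak -# lr = max_lr = 1.0 is reached, and each later cycle scales max_lr by -# lr_shrink**cycle (1.0, 0.1, 0.01, ...).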
- -import math -from dataclasses import dataclass, field -from typing import List - -from omegaconf import II - -from fairseq.dataclass import FairseqDataclass -from fairseq.optim.lr_scheduler import FairseqLRScheduler, register_lr_scheduler - - -@dataclass -class TriangularLRScheduleConfig(FairseqDataclass): - max_lr: float = field( - default="???", metadata={"help": "max learning rate, must be more than cfg.lr"} - ) - lr_period_updates: float = field( - default=5000, - metadata={"help": "initial number of updates per period (cycle length)"}, - ) - lr_shrink: float = field( - default=0.1, metadata={"help": "shrink factor for annealing"} - ) - shrink_min: bool = field( - default=False, metadata={"help": "if set, also shrinks min lr"} - ) - lr: List[float] = II("optimization.lr") - - -@register_lr_scheduler("triangular", dataclass=TriangularLRScheduleConfig) -class TriangularLRSchedule(FairseqLRScheduler): - """Assign LR based on a triangular cyclical schedule. - - See https://arxiv.org/pdf/1506.01186.pdf for details. - """ - - def __init__(self, cfg: TriangularLRScheduleConfig, optimizer): - super().__init__(cfg, optimizer) - if len(cfg.lr) > 1: - raise ValueError( - "Cannot use a fixed learning rate schedule with triangular." - " Consider --lr-scheduler=fixed instead." - ) - - lr = cfg.lr[0] - - assert cfg.max_lr > lr, "max_lr must be more than lr" - self.min_lr = lr - self.max_lr = cfg.max_lr - self.stepsize = cfg.lr_period_updates // 2 - self.lr_shrink = cfg.lr_shrink - self.shrink_min = cfg.shrink_min - - # initial learning rate - self.lr = self.min_lr - self.optimizer.set_lr(self.lr) - - def step(self, epoch, val_loss=None): - """Update the learning rate at the end of the given epoch.""" - super().step(epoch, val_loss) - # we don't change the learning rate at epoch boundaries - return self.optimizer.get_lr() - - def step_update(self, num_updates): - """Update the learning rate after each update.""" - cycle = math.floor(num_updates / (2 * self.stepsize)) - - lr_shrink = self.lr_shrink ** cycle - max_lr = self.max_lr * lr_shrink - if self.shrink_min: - min_lr = self.min_lr * lr_shrink - else: - min_lr = self.min_lr - - x = abs(num_updates / self.stepsize - 2 * (cycle + 1) + 1) - self.lr = min_lr + (max_lr - min_lr) * max(0, (1 - x)) - - self.optimizer.set_lr(self.lr) - return self.lr diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/sacrebleu.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/sacrebleu.sh deleted file mode 100644 index c10bf2b76ea032deabab6f5c9d8a3e1e884f1642..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/scripts/sacrebleu.sh +++ /dev/null @@ -1,27 +0,0 @@ -#!/bin/bash - -if [ $# -ne 4 ]; then - echo "usage: $0 TESTSET SRCLANG TGTLANG GEN" - exit 1 -fi - -TESTSET=$1 -SRCLANG=$2 -TGTLANG=$3 - -GEN=$4 - -if ! 
command -v sacremoses &> /dev/null -then - echo "sacremoses could not be found, please install with: pip install sacremoses" - exit -fi - -grep ^H $GEN \ -| sed 's/^H\-//' \ -| sort -n -k 1 \ -| cut -f 3 \ -| sacremoses detokenize \ -> $GEN.sorted.detok - -sacrebleu --test-set $TESTSET --language-pair "${SRCLANG}-${TGTLANG}" < $GEN.sorted.detok diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv.py deleted file mode 100644 index 4edfe359379bc2445c1ae1ada04bd34ca4a32798..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/lightconv.py +++ /dev/null @@ -1,1019 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq import utils -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderDecoderModel, - FairseqIncrementalDecoder, - register_model, - register_model_architecture, -) -from fairseq.modules import ( - AdaptiveSoftmax, - DynamicConv, - FairseqDropout, - LayerNorm, - LightweightConv, - MultiheadAttention, - PositionalEmbedding, -) -from fairseq.utils import safe_hasattr - - -@register_model("lightconv") -class LightConvModel(FairseqEncoderDecoderModel): - """ - LightConv and DynamicConv model from `"Pay Less Attention with Lightweight and Dynamic Convolutions" (Wu, et al, 2019) - `_. - To use LightConv please set ``--encoder-conv-type lightweight --decoder-conv-type lightweight`` - To use DynamicConv please set ``--encoder-conv-type dynamic --decoder-conv-type dynamic`` - - Args: - encoder (LightConvEncoder): the encoder - decoder (LightConvDecoder): the decoder - - The LightConv model provides the following named architectures and - command-line arguments: - - .. 
argparse:: - :ref: fairseq.models.lightconv_parser - :prog: - """ - - @classmethod - def hub_models(cls): - # fmt: off - - def moses_subword(path): - return { - 'path': path, - 'tokenizer': 'moses', - 'bpe': 'subword_nmt', - } - - return { - 'lightconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.lightconv.tar.gz'), - 'dynamicconv.no_glu.iwslt14.de-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/iwslt14.de-en.dynamicconv.tar.gz'), - 'lightconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv.tar.gz'), - 'dynamicconv.no_glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv.tar.gz'), - 'lightconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt16.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.en-de': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt16.en-de.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt14.en-fr': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt14.en-fr.joined-dict.dynamicconv-glu.tar.gz'), - 'lightconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.lightconv-glu.tar.gz'), - 'dynamicconv.glu.wmt17.zh-en': moses_subword('https://dl.fbaipublicfiles.com/fairseq/models/dynamicconv/wmt17.zh-en.dynamicconv-glu.tar.gz'), - } - # fmt: on - - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--relu-dropout", - type=float, - metavar="D", - help="dropout probability after ReLU in FFN", - ) - parser.add_argument( - "--input-dropout", - type=float, - metavar="D", - help="dropout probability of the inputs", - ) - parser.add_argument( - "--encoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained encoder embedding", - ) - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-conv-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm 
before each encoder block", - ) - parser.add_argument( - "--encoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the encoder", - ) - parser.add_argument( - "--decoder-embed-path", - type=str, - metavar="STR", - help="path to pre-trained decoder embedding", - ) - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-conv-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads or LightConv/DynamicConv heads", - ) - parser.add_argument( - "--decoder-learned-pos", - action="store_true", - help="use learned positional embeddings in the decoder", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--share-decoder-input-output-embed", - action="store_true", - help="share decoder input and output embeddings", - ) - parser.add_argument( - "--share-all-embeddings", - action="store_true", - help="share encoder, decoder and output embeddings" - " (requires shared dictionary and embed dim)", - ) - parser.add_argument( - "--adaptive-softmax-cutoff", - metavar="EXPR", - help="comma separated list of adaptive softmax cutoff points. " - "Must be used with adaptive_loss criterion", - ), - parser.add_argument( - "--adaptive-softmax-dropout", - type=float, - metavar="D", - help="sets adaptive softmax dropout for the tail projections", - ) - - """LightConv and DynamicConv arguments""" - parser.add_argument( - "--encoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31,31]")', - ) - parser.add_argument( - "--decoder-kernel-size-list", - type=lambda x: utils.eval_str_list(x, int), - help='list of kernel size (default: "[3,7,15,31,31,31]")', - ) - parser.add_argument( - "--encoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--decoder-glu", type=utils.eval_bool, help="glu after in proj" - ) - parser.add_argument( - "--encoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument( - "--decoder-conv-type", - default="dynamic", - type=str, - choices=["dynamic", "lightweight"], - help="type of convolution", - ) - parser.add_argument("--weight-softmax", default=True, type=utils.eval_bool) - parser.add_argument( - "--weight-dropout", - type=float, - metavar="D", - help="dropout probability for conv weights", - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if not safe_hasattr(args, "max_source_positions"): - args.max_source_positions = 1024 - if not safe_hasattr(args, "max_target_positions"): - args.max_target_positions = 1024 - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - - def build_embedding(dictionary, embed_dim, path=None): - num_embeddings = len(dictionary) - padding_idx = dictionary.pad() - emb = Embedding(num_embeddings, embed_dim, padding_idx) - # if provided, load 
from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - if args.share_all_embeddings: - if src_dict != tgt_dict: - raise RuntimeError( - "--share-all-embeddings requires a joined dictionary" - ) - if args.encoder_embed_dim != args.decoder_embed_dim: - raise RuntimeError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise RuntimeError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = build_embedding( - tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - - encoder = LightConvEncoder(args, src_dict, encoder_embed_tokens) - decoder = LightConvDecoder(args, tgt_dict, decoder_embed_tokens) - return LightConvModel(encoder, decoder) - - -class LightConvEncoder(FairseqEncoder): - """ - LightConv encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`LightConvEncoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): encoding dictionary - embed_tokens (torch.nn.Embedding): input embedding - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - - embed_dim = embed_tokens.embedding_dim - self.padding_idx = embed_tokens.padding_idx - self.max_source_positions = args.max_source_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) - self.embed_positions = ( - PositionalEmbedding( - args.max_source_positions, - embed_dim, - self.padding_idx, - learned=args.encoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvEncoderLayer( - args, kernel_size=args.encoder_kernel_size_list[i] - ) - for i in range(args.encoder_layers) - ] - ) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.encoder_normalize_before - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward(self, src_tokens, **unused): - """ - Args: - src_tokens (LongTensor): tokens in the source language of shape - `(batch, src_len)` - - Returns: - dict: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - """ - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(src_tokens) - if self.embed_positions is not None: - x += self.embed_positions(src_tokens) - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - - # compute padding mask - encoder_padding_mask = src_tokens.eq(self.padding_idx) - if not encoder_padding_mask.any(): - encoder_padding_mask = None - - # encoder layers - for layer in self.layers: - x = layer(x, encoder_padding_mask) - - if self.normalize: - x = self.layer_norm(x) - - return { - "encoder_out": x, # T x B x C - "encoder_padding_mask": 
encoder_padding_mask, # B x T - } - - def reorder_encoder_out(self, encoder_out, new_order): - """ - Reorder encoder output according to *new_order*. - - Args: - encoder_out: output from the ``forward()`` method - new_order (LongTensor): desired order - - Returns: - *encoder_out* rearranged according to *new_order* - """ - if encoder_out["encoder_out"] is not None: - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - if encoder_out["encoder_padding_mask"] is not None: - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(0, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - if self.embed_positions is None: - return self.max_source_positions - return min(self.max_source_positions, self.embed_positions.max_positions) - - -class LightConvDecoder(FairseqIncrementalDecoder): - """ - LightConv decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`LightConvDecoderLayer`. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - no_encoder_attn (bool, optional): whether to attend to encoder outputs. - Default: ``False`` - """ - - def __init__( - self, args, dictionary, embed_tokens, no_encoder_attn=False, final_norm=True - ): - super().__init__(dictionary) - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.share_input_output_embed = args.share_decoder_input_output_embed - - input_embed_dim = embed_tokens.embedding_dim - embed_dim = args.decoder_embed_dim - output_embed_dim = args.decoder_output_dim - - padding_idx = embed_tokens.padding_idx - self.max_target_positions = args.max_target_positions - - self.embed_tokens = embed_tokens - self.embed_scale = math.sqrt(embed_dim) # todo: try with input_embed_dim - - self.project_in_dim = ( - Linear(input_embed_dim, embed_dim, bias=False) - if embed_dim != input_embed_dim - else None - ) - - self.embed_positions = ( - PositionalEmbedding( - args.max_target_positions, - embed_dim, - padding_idx, - learned=args.decoder_learned_pos, - ) - if not args.no_token_positional_embeddings - else None - ) - - self.layers = nn.ModuleList([]) - self.layers.extend( - [ - LightConvDecoderLayer( - args, no_encoder_attn, kernel_size=args.decoder_kernel_size_list[i] - ) - for i in range(args.decoder_layers) - ] - ) - - self.adaptive_softmax = None - - self.project_out_dim = ( - Linear(embed_dim, output_embed_dim, bias=False) - if embed_dim != output_embed_dim and not args.tie_adaptive_weights - else None - ) - - if args.adaptive_softmax_cutoff is not None: - self.adaptive_softmax = AdaptiveSoftmax( - len(dictionary), - output_embed_dim, - utils.eval_str_list(args.adaptive_softmax_cutoff, type=int), - dropout=args.adaptive_softmax_dropout, - adaptive_inputs=embed_tokens if args.tie_adaptive_weights else None, - factor=args.adaptive_softmax_factor, - tie_proj=args.tie_adaptive_proj, - ) - elif not self.share_input_output_embed: - self.embed_out = nn.Parameter( - torch.Tensor(len(dictionary), output_embed_dim) - ) - nn.init.normal_(self.embed_out, mean=0, std=output_embed_dim ** -0.5) - self.register_buffer("version", torch.Tensor([2])) - self.normalize = args.decoder_normalize_before and final_norm - if self.normalize: - self.layer_norm = LayerNorm(embed_dim) - - def forward( - self, prev_output_tokens, encoder_out=None, 
incremental_state=None, **kwargs - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (Tensor, optional): output from the encoder, used for - encoder-side attention - incremental_state (dict): dictionary used for storing state during - :ref:`Incremental decoding` - - Returns: - tuple: - - the last decoder layer's output of shape `(batch, tgt_len, - vocab)` - - the last decoder layer's attention weights of shape `(batch, - tgt_len, src_len)` - """ - # embed positions - positions = ( - self.embed_positions( - prev_output_tokens, - incremental_state=incremental_state, - ) - if self.embed_positions is not None - else None - ) - - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - if positions is not None: - positions = positions[:, -1:] - - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(prev_output_tokens) - - if self.project_in_dim is not None: - x = self.project_in_dim(x) - - if positions is not None: - x += positions - x = self.dropout_module(x) - - # B x T x C -> T x B x C - x = x.transpose(0, 1) - attn = None - - inner_states = [x] - - # decoder layers - for layer in self.layers: - x, attn = layer( - x, - encoder_out["encoder_out"] if encoder_out is not None else None, - encoder_out["encoder_padding_mask"] - if encoder_out is not None - else None, - incremental_state, - ) - inner_states.append(x) - - if self.normalize: - x = self.layer_norm(x) - - # T x B x C -> B x T x C - x = x.transpose(0, 1) - - if self.project_out_dim is not None: - x = self.project_out_dim(x) - - if self.adaptive_softmax is None: - # project back to size of vocabulary - if self.share_input_output_embed: - x = F.linear(x, self.embed_tokens.weight) - else: - x = F.linear(x, self.embed_out) - - return x, {"attn": attn, "inner_states": inner_states} - - def max_positions(self): - """Maximum output length supported by the decoder.""" - if self.embed_positions is None: - return self.max_target_positions - return min(self.max_target_positions, self.embed_positions.max_positions) - - def buffered_future_mask(self, tensor): - dim = tensor.size(0) - if ( - not hasattr(self, "_future_mask") - or self._future_mask is None - or self._future_mask.device != tensor.device - ): - self._future_mask = torch.triu( - utils.fill_with_neg_inf(tensor.new(dim, dim)), 1 - ) - if self._future_mask.size(0) < dim: - self._future_mask = torch.triu( - utils.fill_with_neg_inf(self._future_mask.resize_(dim, dim)), 1 - ) - return self._future_mask[:dim, :dim] - - -class LightConvEncoderLayer(nn.Module): - """Encoder layer block. 
- - Args: - args (argparse.Namespace): parsed command-line arguments - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, kernel_size=0): - super().__init__() - self.embed_dim = args.encoder_embed_dim - self.conv_dim = args.encoder_conv_dim - padding_l = ( - kernel_size // 2 - if kernel_size % 2 == 1 - else ((kernel_size - 1) // 2, kernel_size // 2) - ) - - if args.encoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.encoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.encoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=padding_l, - weight_softmax=args.weight_softmax, - num_heads=args.encoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.encoder_normalize_before - self.fc1 = Linear(self.embed_dim, args.encoder_ffn_embed_dim) - self.fc2 = Linear(args.encoder_ffn_embed_dim, self.embed_dim) - self.layer_norms = nn.ModuleList([LayerNorm(self.embed_dim) for _ in range(2)]) - - def forward(self, x, encoder_padding_mask): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. - - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(0, x, before=True) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - if encoder_padding_mask is not None: - x = x.masked_fill(encoder_padding_mask.transpose(0, 1).unsqueeze(2), 0) - x = self.conv(x) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(0, x, after=True) - - residual = x - x = self.maybe_layer_norm(1, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(1, x, after=True) - return x - - def maybe_layer_norm(self, i, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return self.layer_norms[i](x) - else: - return x - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -class LightConvDecoderLayer(nn.Module): - """Decoder layer block. - - Args: - args (argparse.Namespace): parsed command-line arguments - no_encoder_attn (bool, optional): whether to attend to encoder outputs. 
- Default: ``False`` - kernel_size: kernel size of the convolution - """ - - def __init__(self, args, no_encoder_attn=False, kernel_size=0): - super().__init__() - self.embed_dim = args.decoder_embed_dim - self.conv_dim = args.decoder_conv_dim - if args.decoder_glu: - self.linear1 = Linear(self.embed_dim, 2 * self.conv_dim) - self.act = nn.GLU() - else: - self.linear1 = Linear(self.embed_dim, self.conv_dim) - self.act = None - if args.decoder_conv_type == "lightweight": - self.conv = LightweightConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - elif args.decoder_conv_type == "dynamic": - self.conv = DynamicConv( - self.conv_dim, - kernel_size, - padding_l=kernel_size - 1, - weight_softmax=args.weight_softmax, - num_heads=args.decoder_attention_heads, - weight_dropout=args.weight_dropout, - ) - else: - raise NotImplementedError - self.linear2 = Linear(self.conv_dim, self.embed_dim) - - self.dropout_module = FairseqDropout( - args.dropout, module_name=self.__class__.__name__ - ) - self.relu_dropout_module = FairseqDropout( - args.relu_dropout, module_name=self.__class__.__name__ - ) - self.input_dropout_module = FairseqDropout( - args.input_dropout, module_name=self.__class__.__name__ - ) - self.normalize_before = args.decoder_normalize_before - - self.conv_layer_norm = LayerNorm(self.embed_dim) - - if no_encoder_attn: - self.encoder_attn = None - self.encoder_attn_layer_norm = None - else: - self.encoder_attn = MultiheadAttention( - self.embed_dim, - args.decoder_attention_heads, - dropout=args.attention_dropout, - encoder_decoder_attention=True, - ) - self.encoder_attn_layer_norm = LayerNorm(self.embed_dim) - - self.fc1 = Linear(self.embed_dim, args.decoder_ffn_embed_dim) - self.fc2 = Linear(args.decoder_ffn_embed_dim, self.embed_dim) - - self.final_layer_norm = LayerNorm(self.embed_dim) - self.need_attn = True - - def forward( - self, - x, - encoder_out, - encoder_padding_mask, - incremental_state, - prev_conv_state=None, - prev_attn_state=None, - conv_mask=None, - conv_padding_mask=None, - ): - """ - Args: - x (Tensor): input to the layer of shape `(seq_len, batch, embed_dim)` - encoder_padding_mask (ByteTensor): binary ByteTensor of shape - `(batch, src_len)` where padding elements are indicated by ``1``. 
- - Returns: - encoded output of shape `(batch, src_len, embed_dim)` - """ - residual = x - x = self.maybe_layer_norm(self.conv_layer_norm, x, before=True) - if prev_conv_state is not None: - if incremental_state is None: - incremental_state = {} - self.conv._set_input_buffer(incremental_state, prev_conv_state) - x = self.input_dropout_module(x) - x = self.linear1(x) - if self.act is not None: - x = self.act(x) - x = self.conv(x, incremental_state=incremental_state) - x = self.linear2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.conv_layer_norm, x, after=True) - - attn = None - if self.encoder_attn is not None: - residual = x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, before=True) - if prev_attn_state is not None: - if incremental_state is None: - incremental_state = {} - prev_key, prev_value = prev_attn_state - saved_state = {"prev_key": prev_key, "prev_value": prev_value} - self.encoder_attn._set_input_buffer(incremental_state, saved_state) - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - need_weights=(not self.training and self.need_attn), - ) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.encoder_attn_layer_norm, x, after=True) - - residual = x - x = self.maybe_layer_norm(self.final_layer_norm, x, before=True) - x = F.relu(self.fc1(x)) - x = self.relu_dropout_module(x) - x = self.fc2(x) - x = self.dropout_module(x) - x = residual + x - x = self.maybe_layer_norm(self.final_layer_norm, x, after=True) - return x, attn - - def maybe_layer_norm(self, layer_norm, x, before=False, after=False): - assert before ^ after - if after ^ self.normalize_before: - return layer_norm(x) - else: - return x - - def make_generation_fast_(self, need_attn=False, **kwargs): - self.need_attn = need_attn - - def extra_repr(self): - return ( - "dropout={}, relu_dropout={}, input_dropout={}, normalize_before={}".format( - self.dropout_module.p, - self.relu_dropout_module.p, - self.input_dropout_module.p, - self.normalize_before, - ) - ) - - -def Embedding(num_embeddings, embedding_dim, padding_idx): - m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx) - nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(m.weight[padding_idx], 0) - return m - - -def Linear(in_features, out_features, bias=True): - m = nn.Linear(in_features, out_features, bias) - nn.init.xavier_uniform_(m.weight) - if bias: - nn.init.constant_(m.bias, 0.0) - return m - - -@register_model_architecture("lightconv", "lightconv") -def base_architecture(args): - args.encoder_embed_path = getattr(args, "encoder_embed_path", None) - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 2048) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 8) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", False) - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", args.encoder_embed_dim) - args.decoder_ffn_embed_dim = getattr( - args, "decoder_ffn_embed_dim", args.encoder_ffn_embed_dim - ) - args.decoder_layers = getattr(args, "decoder_layers", 6) - 
args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 8) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", False) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", False) - args.attention_dropout = getattr(args, "attention_dropout", 0.0) - args.relu_dropout = getattr(args, "relu_dropout", 0.0) - args.dropout = getattr(args, "dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", False - ) - args.share_all_embeddings = getattr(args, "share_all_embeddings", False) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.encoder_conv_dim = getattr(args, "encoder_conv_dim", args.encoder_embed_dim) - args.decoder_conv_dim = getattr(args, "decoder_conv_dim", args.decoder_embed_dim) - - args.encoder_kernel_size_list = getattr( - args, "encoder_kernel_size_list", [3, 7, 15, 31, 31, 31, 31] - ) - args.decoder_kernel_size_list = getattr( - args, "decoder_kernel_size_list", [3, 7, 15, 31, 31, 31] - ) - if len(args.encoder_kernel_size_list) == 1: - args.encoder_kernel_size_list = ( - args.encoder_kernel_size_list * args.encoder_layers - ) - if len(args.decoder_kernel_size_list) == 1: - args.decoder_kernel_size_list = ( - args.decoder_kernel_size_list * args.decoder_layers - ) - assert ( - len(args.encoder_kernel_size_list) == args.encoder_layers - ), "encoder_kernel_size_list doesn't match encoder_layers" - assert ( - len(args.decoder_kernel_size_list) == args.decoder_layers - ), "decoder_kernel_size_list doesn't match decoder_layers" - args.encoder_glu = getattr(args, "encoder_glu", True) - args.decoder_glu = getattr(args, "decoder_glu", True) - args.input_dropout = getattr(args, "input_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", args.attention_dropout) - - -@register_model_architecture("lightconv", "lightconv_iwslt_de_en") -def lightconv_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 7) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.weight_dropout = getattr(args, "weight_dropout", 0.1) - args.encoder_glu = getattr(args, "encoder_glu", False) - args.decoder_glu = getattr(args, "decoder_glu", False) - args.input_dropout = getattr(args, "input_dropout", 0.0) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_de") -def lightconv_wmt_en_de(args): - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_de_big") -def lightconv_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.encoder_embed_dim = getattr(args, 
"encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - base_architecture(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_en_fr_big") -def lightconv_wmt_en_fr_big(args): - args.dropout = getattr(args, "dropout", 0.1) - lightconv_wmt_en_de_big(args) - - -@register_model_architecture("lightconv", "lightconv_wmt_zh_en_big") -def lightconv_wmt_zh_en_big(args): - args.dropout = getattr(args, "dropout", 0.2) - args.attention_dropout = getattr(args, "attention_dropout", 0.2) - args.weight_dropout = getattr(args, "weight_dropout", 0.2) - lightconv_wmt_en_de_big(args) diff --git a/spaces/OneAfterlife/MubertTTM/app.py b/spaces/OneAfterlife/MubertTTM/app.py deleted file mode 100644 index 8d5407e652859791d3401655e0ca875b60011218..0000000000000000000000000000000000000000 --- a/spaces/OneAfterlife/MubertTTM/app.py +++ /dev/null @@ -1,98 +0,0 @@ -import time - -import gradio as gr -from sentence_transformers import SentenceTransformer - -import httpx -import json - -from utils import get_tags_for_prompts, get_mubert_tags_embeddings, get_pat - -minilm = SentenceTransformer('all-MiniLM-L6-v2') -mubert_tags_embeddings = get_mubert_tags_embeddings(minilm) - - -def get_track_by_tags(tags, pat, duration, maxit=20, loop=False): - if loop: - mode = "loop" - else: - mode = "track" - r = httpx.post('https://api-b2b.mubert.com/v2/RecordTrackTTM', - json={ - "method": "RecordTrackTTM", - "params": { - "pat": pat, - "duration": duration, - "tags": tags, - "mode": mode - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, rdata['error']['text'] - trackurl = rdata['data']['tasks'][0]['download_link'] - - print('Generating track ', end='') - for i in range(maxit): - r = httpx.get(trackurl) - if r.status_code == 200: - return trackurl - time.sleep(1) - - -def generate_track_by_prompt(email, prompt, duration, loop=False): - try: - pat = get_pat(email) - _, tags = get_tags_for_prompts(minilm, mubert_tags_embeddings, [prompt, ])[0] - return get_track_by_tags(tags, pat, int(duration), loop=loop), "Success", ",".join(tags) - except Exception as e: - return None, str(e), "" - - -block = gr.Blocks() - -with block: - gr.HTML( - """ -
-        <div>
-            <h1>Mubert Text to Music</h1>
-            <p>All music is generated by Mubert API – www.mubert.com</p>
-        </div>
- """ - ) - with gr.Group(): - with gr.Box(): - email = gr.Textbox(label="Enter your email (for API token)") - prompt = gr.Textbox(label="Key prompts to generate a track (genre, theme, etc.)") - duration = gr.Slider(label="Duration (seconds)", value=60, maximum=300) - is_loop = gr.Checkbox(label="Generate loop") - out = gr.Audio() - result_msg = gr.Text(label="Result message") - tags = gr.Text(label="Interpreted tags from your key prompts") - btn = gr.Button("Submit").style(full_width=True) - - btn.click(fn=generate_track_by_prompt, inputs=[email, prompt, duration, is_loop], outputs=[out, result_msg, tags]) - - gr.HTML(''' - - -

-    If you request anything over 250 seconds, you will need to wait 10 or 30 seconds after it is done processing.
-
-    '''
-    )
-
-block.launch()
\ No newline at end of file
diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/notes/contributing.md b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/notes/contributing.md
deleted file mode 100644
index 95181235eaff1cb5cbb2dc554e8d4991b603d0e5..0000000000000000000000000000000000000000
--- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docs/notes/contributing.md
+++ /dev/null
@@ -1 +0,0 @@
-../../.github/CONTRIBUTING.md
\ No newline at end of file
diff --git a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/metrics/compute_metrics.py b/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/metrics/compute_metrics.py
deleted file mode 100644
index 65e01c32ebbdcca4d02f25602a0efeb274154e66..0000000000000000000000000000000000000000
--- a/spaces/Oumar199/Fake-Real-Face-Detection/fake_face_detection/metrics/compute_metrics.py
+++ /dev/null
@@ -1,33 +0,0 @@
-
-import numpy as np
-import evaluate
-
-metrics = {
-    'f1': evaluate.load('f1'),
-    'accuracy': evaluate.load('accuracy'),
-    'roc_auc': evaluate.load('roc_auc', 'multiclass')
-}
-
-def compute_metrics(p):  # parts adapted from https://huggingface.co/blog/fine-tune-vit
-
-    predictions, label_ids = p
-
-    metric = metrics['accuracy'].compute(predictions=np.argmax(predictions, axis=1), references=label_ids)
-
-    f1_score = metrics['f1'].compute(predictions=np.argmax(predictions, axis=1), references=label_ids)
-
-    metric.update(f1_score)
-
-    try:
-
-        auc = metrics['roc_auc'].compute(prediction_scores=predictions, references=label_ids)
-
-        metric.update(auc)
-
-    except Exception:  # ROC AUC may be undefined for this batch; skip it rather than fail
-
-        pass
-
-    return metric
-
-
diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py
deleted file mode 100644
index a33e7972877f902d0e7d18401ca675e3e4e60a18..0000000000000000000000000000000000000000
--- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/configs/_base_/models/fcn_unet_s5-d16.py
+++ /dev/null
@@ -1,51 +0,0 @@
-# model settings
-norm_cfg = dict(type='SyncBN', requires_grad=True)
-model = dict(
-    type='EncoderDecoder',
-    pretrained=None,
-    backbone=dict(
-        type='UNet',
-        in_channels=3,
-        base_channels=64,
-        num_stages=5,
-        strides=(1, 1, 1, 1, 1),
-        enc_num_convs=(2, 2, 2, 2, 2),
-        dec_num_convs=(2, 2, 2, 2),
-        downsamples=(True, True, True, True),
-        enc_dilations=(1, 1, 1, 1, 1),
-        dec_dilations=(1, 1, 1, 1),
-        with_cp=False,
-        conv_cfg=None,
-        norm_cfg=norm_cfg,
-        act_cfg=dict(type='ReLU'),
-        upsample_cfg=dict(type='InterpConv'),
-        norm_eval=False),
-    decode_head=dict(
-        type='FCNHead',
-        in_channels=64,
-        in_index=4,
-        channels=64,
-        num_convs=1,
-        concat_input=False,
-        dropout_ratio=0.1,
-        num_classes=2,
-        norm_cfg=norm_cfg,
-        align_corners=False,
-        loss_decode=dict(
-            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
-    auxiliary_head=dict(
-        type='FCNHead',
-        in_channels=128,
-        in_index=3,
-        channels=64,
-        num_convs=1,
-        concat_input=False,
-        dropout_ratio=0.1,
-        num_classes=2,
-        norm_cfg=norm_cfg,
-        align_corners=False,
-        loss_decode=dict(
-            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
-    # model training and testing settings
-    train_cfg=dict(),
-    test_cfg=dict(mode='slide', crop_size=256, stride=170))
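Editorial note on the config just above: it is an mmsegmentation-style base config in which norm_cfg is shared by the backbone and both heads, the auxiliary FCNHead reads the UNet stage selected by in_index=3, and inference runs as a sliding window (mode='slide', crop_size=256, stride=170). A minimal sketch of how such a base config is typically consumed, assuming an environment with mmcv and mmseg installed; the local path below is hypothetical:

import torch
from mmcv import Config
from mmseg.models import build_segmentor

# Parse the dict-style config file (hypothetical local copy).
cfg = Config.fromfile('configs/_base_/models/fcn_unet_s5-d16.py')
# train_cfg/test_cfg already live inside cfg.model in this config style.
model = build_segmentor(cfg.model)
model.init_weights()

# Shape sanity check through the UNet backbone; the input side must be
# divisible by 2**4 for the five-stage UNet, so 256 works.
feats = model.extract_feat(torch.randn(1, 3, 256, 256))
print([f.shape for f in feats])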
diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/dwarf.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/dwarf.go
deleted file mode 100644
index 46bf9a4eae67971838c7b6a4a26b17d630195575..0000000000000000000000000000000000000000
--- a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/dwarf.go
+++ /dev/null
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:34e32ed0bbbfc3abe88444d0fe4c1c5aa292581ce9fea4629ddd1e7685fd6799
-size 1142437
diff --git a/spaces/Purple11/Grounded-Diffusion/ldm/modules/image_degradation/utils_image.py b/spaces/Purple11/Grounded-Diffusion/ldm/modules/image_degradation/utils_image.py
deleted file mode 100644
index 0175f155ad900ae33c3c46ed87f49b352e3faf98..0000000000000000000000000000000000000000
--- a/spaces/Purple11/Grounded-Diffusion/ldm/modules/image_degradation/utils_image.py
+++ /dev/null
@@ -1,916 +0,0 @@
-import os
-import math
-import random
-import numpy as np
-import torch
-import cv2
-from torchvision.utils import make_grid
-from datetime import datetime
-import matplotlib.pyplot as plt  # needed by imshow() and surf() below
-
-
-os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"
-
-
-'''
-# --------------------------------------------
-# Kai Zhang (github: https://github.com/cszn)
-# 03/Mar/2019
-# --------------------------------------------
-# https://github.com/twhui/SRGAN-pyTorch
-# https://github.com/xinntao/BasicSR
-# --------------------------------------------
-'''
-
-
-IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']
-
-
-def is_image_file(filename):
-    return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)
-
-
-def get_timestamp():
-    return datetime.now().strftime('%y%m%d-%H%M%S')
-
-
-def imshow(x, title=None, cbar=False, figsize=None):
-    plt.figure(figsize=figsize)
-    plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
-    if title:
-        plt.title(title)
-    if cbar:
-        plt.colorbar()
-    plt.show()
-
-
-def surf(Z, cmap='rainbow', figsize=None):
-    plt.figure(figsize=figsize)
-    ax3 = plt.axes(projection='3d')
-
-    w, h = Z.shape[:2]
-    xx = np.arange(0,w,1)
-    yy = np.arange(0,h,1)
-    X, Y = np.meshgrid(xx, yy)
-    ax3.plot_surface(X,Y,Z,cmap=cmap)
-    #ax3.contour(X,Y,Z, zdim='z',offset=-2,cmap=cmap)
-    plt.show()
-
-
-'''
-# --------------------------------------------
-# get image paths
-# --------------------------------------------
-'''
-
-
-def get_image_paths(dataroot):
-    paths = None  # return None if dataroot is None
-    if dataroot is not None:
-        paths = sorted(_get_paths_from_images(dataroot))
-    return paths
-
-
-def _get_paths_from_images(path):
-    assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
-    images = []
-    for dirpath, _, fnames in sorted(os.walk(path)):
-        for fname in sorted(fnames):
-            if is_image_file(fname):
-                img_path = os.path.join(dirpath, fname)
-                images.append(img_path)
-    assert images, '{:s} has no valid image file'.format(path)
-    return images
-
-
-'''
-# --------------------------------------------
-# split large images into small images
-# --------------------------------------------
-'''
-
-
-def patches_from_image(img, p_size=512, p_overlap=64, p_max=800):
-    w, h = img.shape[:2]
-    patches = []
-    if w > p_max and h > p_max:
-        w1 = list(np.arange(0, w-p_size, p_size-p_overlap, dtype=int))  # np.int was removed in NumPy >= 1.24
-        h1 = list(np.arange(0, h-p_size, p_size-p_overlap, dtype=int))
-        w1.append(w-p_size)
-        h1.append(h-p_size)
-#        print(w1)
-#        print(h1)
-        for i in w1:
-            for j in h1:
-                patches.append(img[i:i+p_size, j:j+p_size,:])
-    else:
-        patches.append(img)
-
-    return patches
-
-
-def imssave(imgs, img_path):
-    """
-    imgs: list, N images of size WxHxC
-    """
-    img_name, ext = os.path.splitext(os.path.basename(img_path))
-
-    for i, img in enumerate(imgs):
-        if img.ndim == 3:
-            img = img[:, :, [2, 1, 0]]
-        new_path = os.path.join(os.path.dirname(img_path), img_name+str('_s{:04d}'.format(i))+'.png')
-        cv2.imwrite(new_path, img)
-
-
-def split_imageset(original_dataroot, target_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
-    """
-    Split the large images from original_dataroot into small overlapped images of size (p_size)x(p_size),
-    and save them into target_dataroot; only images larger than (p_max)x(p_max)
-    will be split.
-    Args:
-        original_dataroot:
-        target_dataroot:
-        p_size: size of small images
-        p_overlap: overlap between neighbouring patches; the patch size used in training is a good choice
-        p_max: images smaller than (p_max)x(p_max) are kept unchanged.
-    """
-    paths = get_image_paths(original_dataroot)
-    for img_path in paths:
-        # img_name, ext = os.path.splitext(os.path.basename(img_path))
-        img = imread_uint(img_path, n_channels=n_channels)
-        patches = patches_from_image(img, p_size, p_overlap, p_max)
-        imssave(patches, os.path.join(target_dataroot, os.path.basename(img_path)))
-        #if original_dataroot == target_dataroot:
-        #del img_path
-
-'''
-# --------------------------------------------
-# makedir
-# --------------------------------------------
-'''
-
-
-def mkdir(path):
-    if not os.path.exists(path):
-        os.makedirs(path)
-
-
-def mkdirs(paths):
-    if isinstance(paths, str):
-        mkdir(paths)
-    else:
-        for path in paths:
-            mkdir(path)
-
-
-def mkdir_and_rename(path):
-    if os.path.exists(path):
-        new_name = path + '_archived_' + get_timestamp()
-        print('Path already exists. Rename it to [{:s}]'.format(new_name))
-        os.rename(path, new_name)
-    os.makedirs(path)
-
-
-'''
-# --------------------------------------------
-# read image from path
-# opencv is fast, but reads BGR numpy images
-# --------------------------------------------
-'''
-
-
-# --------------------------------------------
-# get uint8 image of size HxWxn_channels (RGB)
-# --------------------------------------------
-def imread_uint(path, n_channels=3):
-    # input: path
-    # output: HxWx3(RGB or GGG), or HxWx1 (G)
-    if n_channels == 1:
-        img = cv2.imread(path, 0)  # cv2.IMREAD_GRAYSCALE
-        img = np.expand_dims(img, axis=2)  # HxWx1
-    elif n_channels == 3:
-        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # BGR or G
-        if img.ndim == 2:
-            img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)  # GGG
-        else:
-            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # RGB
-    return img
-
-
-# --------------------------------------------
-# matlab's imwrite
-# --------------------------------------------
-def imsave(img, img_path):
-    img = np.squeeze(img)
-    if img.ndim == 3:
-        img = img[:, :, [2, 1, 0]]
-    cv2.imwrite(img_path, img)
-
-def imwrite(img, img_path):
-    img = np.squeeze(img)
-    if img.ndim == 3:
-        img = img[:, :, [2, 1, 0]]
-    cv2.imwrite(img_path, img)
-
-
-
-# --------------------------------------------
-# get single image of size HxWxn_channels (BGR)
-# --------------------------------------------
-def read_img(path):
-    # read image by cv2
-    # return: Numpy float32, HWC, BGR, [0,1]
-    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # cv2.IMREAD_GRAYSCALE
-    img = img.astype(np.float32) / 255.
- if img.ndim == 2: - img = np.expand_dims(img, axis=2) - # some images have 4 channels - if img.shape[2] > 3: - img = img[:, :, :3] - return img - - -''' -# -------------------------------------------- -# image format conversion -# -------------------------------------------- -# numpy(single) <---> numpy(unit) -# numpy(single) <---> tensor -# numpy(unit) <---> tensor -# -------------------------------------------- -''' - - -# -------------------------------------------- -# numpy(single) [0, 1] <---> numpy(unit) -# -------------------------------------------- - - -def uint2single(img): - - return np.float32(img/255.) - - -def single2uint(img): - - return np.uint8((img.clip(0, 1)*255.).round()) - - -def uint162single(img): - - return np.float32(img/65535.) - - -def single2uint16(img): - - return np.uint16((img.clip(0, 1)*65535.).round()) - - -# -------------------------------------------- -# numpy(unit) (HxWxC or HxW) <---> tensor -# -------------------------------------------- - - -# convert uint to 4-dimensional torch tensor -def uint2tensor4(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0) - - -# convert uint to 3-dimensional torch tensor -def uint2tensor3(img): - if img.ndim == 2: - img = np.expand_dims(img, axis=2) - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.) - - -# convert 2/3/4-dimensional torch tensor to uint -def tensor2uint(img): - img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - return np.uint8((img*255.0).round()) - - -# -------------------------------------------- -# numpy(single) (HxWxC) <---> tensor -# -------------------------------------------- - - -# convert single (HxWxC) to 3-dimensional torch tensor -def single2tensor3(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float() - - -# convert single (HxWxC) to 4-dimensional torch tensor -def single2tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0) - - -# convert torch tensor to single -def tensor2single(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - - return img - -# convert torch tensor to single -def tensor2single3(img): - img = img.data.squeeze().float().cpu().numpy() - if img.ndim == 3: - img = np.transpose(img, (1, 2, 0)) - elif img.ndim == 2: - img = np.expand_dims(img, axis=2) - return img - - -def single2tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0) - - -def single32tensor5(img): - return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0) - - -def single42tensor4(img): - return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float() - - -# from skimage.io import imread, imsave -def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)): - ''' - Converts a torch Tensor into an image Numpy array of BGR channel order - Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order - Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default) - ''' - tensor = tensor.squeeze().float().cpu().clamp_(*min_max) # squeeze first, then clamp - tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0]) # to range [0,1] - n_dim = tensor.dim() - if n_dim == 4: - n_img = len(tensor) - img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), 
normalize=False).numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 3: - img_np = tensor.numpy() - img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0)) # HWC, BGR - elif n_dim == 2: - img_np = tensor.numpy() - else: - raise TypeError( - 'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim)) - if out_type == np.uint8: - img_np = (img_np * 255.0).round() - # Important. Unlike matlab, numpy.unit8() WILL NOT round by default. - return img_np.astype(out_type) - - -''' -# -------------------------------------------- -# Augmentation, flipe and/or rotate -# -------------------------------------------- -# The following two are enough. -# (1) augmet_img: numpy image of WxHxC or WxH -# (2) augment_img_tensor4: tensor image 1xCxWxH -# -------------------------------------------- -''' - - -def augment_img(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return np.flipud(np.rot90(img)) - elif mode == 2: - return np.flipud(img) - elif mode == 3: - return np.rot90(img, k=3) - elif mode == 4: - return np.flipud(np.rot90(img, k=2)) - elif mode == 5: - return np.rot90(img) - elif mode == 6: - return np.rot90(img, k=2) - elif mode == 7: - return np.flipud(np.rot90(img, k=3)) - - -def augment_img_tensor4(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - if mode == 0: - return img - elif mode == 1: - return img.rot90(1, [2, 3]).flip([2]) - elif mode == 2: - return img.flip([2]) - elif mode == 3: - return img.rot90(3, [2, 3]) - elif mode == 4: - return img.rot90(2, [2, 3]).flip([2]) - elif mode == 5: - return img.rot90(1, [2, 3]) - elif mode == 6: - return img.rot90(2, [2, 3]) - elif mode == 7: - return img.rot90(3, [2, 3]).flip([2]) - - -def augment_img_tensor(img, mode=0): - '''Kai Zhang (github: https://github.com/cszn) - ''' - img_size = img.size() - img_np = img.data.cpu().numpy() - if len(img_size) == 3: - img_np = np.transpose(img_np, (1, 2, 0)) - elif len(img_size) == 4: - img_np = np.transpose(img_np, (2, 3, 1, 0)) - img_np = augment_img(img_np, mode=mode) - img_tensor = torch.from_numpy(np.ascontiguousarray(img_np)) - if len(img_size) == 3: - img_tensor = img_tensor.permute(2, 0, 1) - elif len(img_size) == 4: - img_tensor = img_tensor.permute(3, 2, 0, 1) - - return img_tensor.type_as(img) - - -def augment_img_np3(img, mode=0): - if mode == 0: - return img - elif mode == 1: - return img.transpose(1, 0, 2) - elif mode == 2: - return img[::-1, :, :] - elif mode == 3: - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 4: - return img[:, ::-1, :] - elif mode == 5: - img = img[:, ::-1, :] - img = img.transpose(1, 0, 2) - return img - elif mode == 6: - img = img[:, ::-1, :] - img = img[::-1, :, :] - return img - elif mode == 7: - img = img[:, ::-1, :] - img = img[::-1, :, :] - img = img.transpose(1, 0, 2) - return img - - -def augment_imgs(img_list, hflip=True, rot=True): - # horizontal flip OR rotate - hflip = hflip and random.random() < 0.5 - vflip = rot and random.random() < 0.5 - rot90 = rot and random.random() < 0.5 - - def _augment(img): - if hflip: - img = img[:, ::-1, :] - if vflip: - img = img[::-1, :, :] - if rot90: - img = img.transpose(1, 0, 2) - return img - - return [_augment(img) for img in img_list] - - -''' -# -------------------------------------------- -# modcrop and shave -# -------------------------------------------- -''' - - -def modcrop(img_in, scale): - # img_in: Numpy, HWC or HW - img 
= np.copy(img_in) - if img.ndim == 2: - H, W = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r] - elif img.ndim == 3: - H, W, C = img.shape - H_r, W_r = H % scale, W % scale - img = img[:H - H_r, :W - W_r, :] - else: - raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim)) - return img - - -def shave(img_in, border=0): - # img_in: Numpy, HWC or HW - img = np.copy(img_in) - h, w = img.shape[:2] - img = img[border:h-border, border:w-border] - return img - - -''' -# -------------------------------------------- -# image processing process on numpy image -# channel_convert(in_c, tar_type, img_list): -# rgb2ycbcr(img, only_y=True): -# bgr2ycbcr(img, only_y=True): -# ycbcr2rgb(img): -# -------------------------------------------- -''' - - -def rgb2ycbcr(img, only_y=True): - '''same as matlab rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786], - [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def ycbcr2rgb(img): - '''same as matlab ycbcr2rgb - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071], - [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. - return rlt.astype(in_img_type) - - -def bgr2ycbcr(img, only_y=True): - '''bgr version of rgb2ycbcr - only_y: only return Y channel - Input: - uint8, [0, 255] - float, [0, 1] - ''' - in_img_type = img.dtype - img.astype(np.float32) - if in_img_type != np.uint8: - img *= 255. - # convert - if only_y: - rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0 - else: - rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786], - [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128] - if in_img_type == np.uint8: - rlt = rlt.round() - else: - rlt /= 255. 
- return rlt.astype(in_img_type) - - -def channel_convert(in_c, tar_type, img_list): - # conversion among BGR, gray and y - if in_c == 3 and tar_type == 'gray': # BGR to gray - gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list] - return [np.expand_dims(img, axis=2) for img in gray_list] - elif in_c == 3 and tar_type == 'y': # BGR to y - y_list = [bgr2ycbcr(img, only_y=True) for img in img_list] - return [np.expand_dims(img, axis=2) for img in y_list] - elif in_c == 1 and tar_type == 'RGB': # gray/y to BGR - return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list] - else: - return img_list - - -''' -# -------------------------------------------- -# metric, PSNR and SSIM -# -------------------------------------------- -''' - - -# -------------------------------------------- -# PSNR -# -------------------------------------------- -def calculate_psnr(img1, img2, border=0): - # img1 and img2 have range [0, 255] - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - mse = np.mean((img1 - img2)**2) - if mse == 0: - return float('inf') - return 20 * math.log10(255.0 / math.sqrt(mse)) - - -# -------------------------------------------- -# SSIM -# -------------------------------------------- -def calculate_ssim(img1, img2, border=0): - '''calculate SSIM - the same outputs as MATLAB's - img1, img2: [0, 255] - ''' - #img1 = img1.squeeze() - #img2 = img2.squeeze() - if not img1.shape == img2.shape: - raise ValueError('Input images must have the same dimensions.') - h, w = img1.shape[:2] - img1 = img1[border:h-border, border:w-border] - img2 = img2[border:h-border, border:w-border] - - if img1.ndim == 2: - return ssim(img1, img2) - elif img1.ndim == 3: - if img1.shape[2] == 3: - ssims = [] - for i in range(3): - ssims.append(ssim(img1[:,:,i], img2[:,:,i])) - return np.array(ssims).mean() - elif img1.shape[2] == 1: - return ssim(np.squeeze(img1), np.squeeze(img2)) - else: - raise ValueError('Wrong input image dimensions.') - - -def ssim(img1, img2): - C1 = (0.01 * 255)**2 - C2 = (0.03 * 255)**2 - - img1 = img1.astype(np.float64) - img2 = img2.astype(np.float64) - kernel = cv2.getGaussianKernel(11, 1.5) - window = np.outer(kernel, kernel.transpose()) - - mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5] # valid - mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5] - mu1_sq = mu1**2 - mu2_sq = mu2**2 - mu1_mu2 = mu1 * mu2 - sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq - sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq - sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2 - - ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * - (sigma1_sq + sigma2_sq + C2)) - return ssim_map.mean() - - -''' -# -------------------------------------------- -# matlab's bicubic imresize (numpy and torch) [0, 1] -# -------------------------------------------- -''' - - -# matlab 'imresize' function, now only support 'bicubic' -def cubic(x): - absx = torch.abs(x) - absx2 = absx**2 - absx3 = absx**3 - return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \ - (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx)) - - -def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, 
antialiasing): - if (scale < 1) and (antialiasing): - # Use a modified kernel to simultaneously interpolate and antialias- larger kernel width - kernel_width = kernel_width / scale - - # Output-space coordinates - x = torch.linspace(1, out_length, out_length) - - # Input-space coordinates. Calculate the inverse mapping such that 0.5 - # in output space maps to 0.5 in input space, and 0.5+scale in output - # space maps to 1.5 in input space. - u = x / scale + 0.5 * (1 - 1 / scale) - - # What is the left-most pixel that can be involved in the computation? - left = torch.floor(u - kernel_width / 2) - - # What is the maximum number of pixels that can be involved in the - # computation? Note: it's OK to use an extra pixel here; if the - # corresponding weights are all zero, it will be eliminated at the end - # of this function. - P = math.ceil(kernel_width) + 2 - - # The indices of the input pixels involved in computing the k-th output - # pixel are in row k of the indices matrix. - indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view( - 1, P).expand(out_length, P) - - # The weights used to compute the k-th output pixel are in row k of the - # weights matrix. - distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices - # apply cubic kernel - if (scale < 1) and (antialiasing): - weights = scale * cubic(distance_to_center * scale) - else: - weights = cubic(distance_to_center) - # Normalize the weights matrix so that each row sums to 1. - weights_sum = torch.sum(weights, 1).view(out_length, 1) - weights = weights / weights_sum.expand(out_length, P) - - # If a column in weights is all zero, get rid of it. only consider the first and last column. - weights_zero_tmp = torch.sum((weights == 0), 0) - if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6): - indices = indices.narrow(1, 1, P - 2) - weights = weights.narrow(1, 1, P - 2) - if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6): - indices = indices.narrow(1, 0, P - 2) - weights = weights.narrow(1, 0, P - 2) - weights = weights.contiguous() - indices = indices.contiguous() - sym_len_s = -indices.min() + 1 - sym_len_e = indices.max() - in_length - indices = indices + sym_len_s - 1 - return weights, indices, int(sym_len_s), int(sym_len_e) - - -# -------------------------------------------- -# imresize for tensor image [0, 1] -# -------------------------------------------- -def imresize(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: pytorch tensor, CHW or HW [0,1] - # output: CHW or HW [0,1] w/o round - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(0) - in_C, in_H, in_W = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. 
- - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W) - img_aug.narrow(1, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:, :sym_len_Hs, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[:, -sym_len_He:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(in_C, out_H, in_W) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We) - out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :, :sym_len_Ws] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, :, -sym_len_We:] - inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(2, inv_idx) - out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(in_C, out_H, out_W) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - return out_2 - - -# -------------------------------------------- -# imresize for numpy image [0, 1] -# -------------------------------------------- -def imresize_np(img, scale, antialiasing=True): - # Now the scale should be the same for H and W - # input: img: Numpy, HWC or HW [0,1] - # output: HWC or HW [0,1] w/o round - img = torch.from_numpy(img) - need_squeeze = True if img.dim() == 2 else False - if need_squeeze: - img.unsqueeze_(2) - - in_H, in_W, in_C = img.size() - out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale) - kernel_width = 4 - kernel = 'cubic' - - # Return the desired dimension order for performing the resize. The - # strategy is to perform the resize first along the dimension with the - # smallest scale factor. - # Now we do not support this. 
- - # get weights and indices - weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices( - in_H, out_H, scale, kernel, kernel_width, antialiasing) - weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices( - in_W, out_W, scale, kernel, kernel_width, antialiasing) - # process H dimension - # symmetric copying - img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C) - img_aug.narrow(0, sym_len_Hs, in_H).copy_(img) - - sym_patch = img[:sym_len_Hs, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv) - - sym_patch = img[-sym_len_He:, :, :] - inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(0, inv_idx) - img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv) - - out_1 = torch.FloatTensor(out_H, in_W, in_C) - kernel_width = weights_H.size(1) - for i in range(out_H): - idx = int(indices_H[i][0]) - for j in range(out_C): - out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i]) - - # process W dimension - # symmetric copying - out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C) - out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1) - - sym_patch = out_1[:, :sym_len_Ws, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv) - - sym_patch = out_1[:, -sym_len_We:, :] - inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long() - sym_patch_inv = sym_patch.index_select(1, inv_idx) - out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv) - - out_2 = torch.FloatTensor(out_H, out_W, in_C) - kernel_width = weights_W.size(1) - for i in range(out_W): - idx = int(indices_W[i][0]) - for j in range(out_C): - out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i]) - if need_squeeze: - out_2.squeeze_() - - return out_2.numpy() - - -if __name__ == '__main__': - print('---') -# img = imread_uint('test.bmp', 3) -# img = uint2single(img) -# img_bicubic = imresize_np(img, 1/4) \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/unixccompiler.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/unixccompiler.py deleted file mode 100644 index 4ab771a475df8f53f4054d7869366a2457397a09..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/unixccompiler.py +++ /dev/null @@ -1,401 +0,0 @@ -"""distutils.unixccompiler - -Contains the UnixCCompiler class, a subclass of CCompiler that handles -the "typical" Unix-style command-line C compiler: - * macros defined with -Dname[=value] - * macros undefined with -Uname - * include search directories specified with -Idir - * libraries specified with -lllib - * library search directories specified with -Ldir - * compile handled by 'cc' (or similar) executable with -c option: - compiles .c to .o - * link static library handled by 'ar' command (possibly with 'ranlib') - * link shared library handled by 'cc -shared' -""" - -import os -import sys -import re -import shlex -import itertools - -from distutils import sysconfig -from distutils.dep_util import newer -from distutils.ccompiler import CCompiler, gen_preprocess_options, gen_lib_options -from distutils.errors 
import DistutilsExecError, CompileError, LibError, LinkError -from distutils import log -from ._macos_compat import compiler_fixup - -# XXX Things not currently handled: -# * optimization/debug/warning flags; we just use whatever's in Python's -# Makefile and live with it. Is this adequate? If not, we might -# have to have a bunch of subclasses GNUCCompiler, SGICCompiler, -# SunCCompiler, and I suspect down that road lies madness. -# * even if we don't know a warning flag from an optimization flag, -# we need some way for outsiders to feed preprocessor/compiler/linker -# flags in to us -- eg. a sysadmin might want to mandate certain flags -# via a site config file, or a user might want to set something for -# compiling this module distribution only via the setup.py command -# line, whatever. As long as these options come from something on the -# current system, they can be as system-dependent as they like, and we -# should just happily stuff them into the preprocessor/compiler/linker -# options and carry on. - - -def _split_env(cmd): - """ - For macOS, split command into 'env' portion (if any) - and the rest of the linker command. - - >>> _split_env(['a', 'b', 'c']) - ([], ['a', 'b', 'c']) - >>> _split_env(['/usr/bin/env', 'A=3', 'gcc']) - (['/usr/bin/env', 'A=3'], ['gcc']) - """ - pivot = 0 - if os.path.basename(cmd[0]) == "env": - pivot = 1 - while '=' in cmd[pivot]: - pivot += 1 - return cmd[:pivot], cmd[pivot:] - - -def _split_aix(cmd): - """ - AIX platforms prefix the compiler with the ld_so_aix - script, so split that from the linker command. - - >>> _split_aix(['a', 'b', 'c']) - ([], ['a', 'b', 'c']) - >>> _split_aix(['/bin/foo/ld_so_aix', 'gcc']) - (['/bin/foo/ld_so_aix'], ['gcc']) - """ - pivot = os.path.basename(cmd[0]) == 'ld_so_aix' - return cmd[:pivot], cmd[pivot:] - - -def _linker_params(linker_cmd, compiler_cmd): - """ - The linker command usually begins with the compiler - command (possibly multiple elements), followed by zero or more - params for shared library building. - - If the LDSHARED env variable overrides the linker command, - however, the commands may not match. - - Return the best guess of the linker parameters by stripping - the linker command. If the compiler command does not - match the linker command, assume the linker command is - just the first element. - - >>> _linker_params('gcc foo bar'.split(), ['gcc']) - ['foo', 'bar'] - >>> _linker_params('gcc foo bar'.split(), ['other']) - ['foo', 'bar'] - >>> _linker_params('ccache gcc foo bar'.split(), 'ccache gcc'.split()) - ['foo', 'bar'] - >>> _linker_params(['gcc'], ['gcc']) - [] - """ - c_len = len(compiler_cmd) - pivot = c_len if linker_cmd[:c_len] == compiler_cmd else 1 - return linker_cmd[pivot:] - - -class UnixCCompiler(CCompiler): - - compiler_type = 'unix' - - # These are used by CCompiler in two places: the constructor sets - # instance attributes 'preprocessor', 'compiler', etc. from them, and - # 'set_executable()' allows any of these to be set. The defaults here - # are pretty generic; they will probably have to be set by an outsider - # (eg. using information discovered by the sysconfig about building - # Python extensions). 
- executables = { - 'preprocessor': None, - 'compiler': ["cc"], - 'compiler_so': ["cc"], - 'compiler_cxx': ["cc"], - 'linker_so': ["cc", "-shared"], - 'linker_exe': ["cc"], - 'archiver': ["ar", "-cr"], - 'ranlib': None, - } - - if sys.platform[:6] == "darwin": - executables['ranlib'] = ["ranlib"] - - # Needed for the filename generation methods provided by the base - # class, CCompiler. NB. whoever instantiates/uses a particular - # UnixCCompiler instance should set 'shared_lib_ext' -- we set a - # reasonable common default here, but it's not necessarily used on all - # Unices! - - src_extensions = [".c", ".C", ".cc", ".cxx", ".cpp", ".m"] - obj_extension = ".o" - static_lib_extension = ".a" - shared_lib_extension = ".so" - dylib_lib_extension = ".dylib" - xcode_stub_lib_extension = ".tbd" - static_lib_format = shared_lib_format = dylib_lib_format = "lib%s%s" - xcode_stub_lib_format = dylib_lib_format - if sys.platform == "cygwin": - exe_extension = ".exe" - - def preprocess( - self, - source, - output_file=None, - macros=None, - include_dirs=None, - extra_preargs=None, - extra_postargs=None, - ): - fixed_args = self._fix_compile_args(None, macros, include_dirs) - ignore, macros, include_dirs = fixed_args - pp_opts = gen_preprocess_options(macros, include_dirs) - pp_args = self.preprocessor + pp_opts - if output_file: - pp_args.extend(['-o', output_file]) - if extra_preargs: - pp_args[:0] = extra_preargs - if extra_postargs: - pp_args.extend(extra_postargs) - pp_args.append(source) - - # reasons to preprocess: - # - force is indicated - # - output is directed to stdout - # - source file is newer than the target - preprocess = self.force or output_file is None or newer(source, output_file) - if not preprocess: - return - - if output_file: - self.mkpath(os.path.dirname(output_file)) - - try: - self.spawn(pp_args) - except DistutilsExecError as msg: - raise CompileError(msg) - - def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts): - compiler_so = compiler_fixup(self.compiler_so, cc_args + extra_postargs) - try: - self.spawn(compiler_so + cc_args + [src, '-o', obj] + extra_postargs) - except DistutilsExecError as msg: - raise CompileError(msg) - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - objects, output_dir = self._fix_object_args(objects, output_dir) - - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - self.mkpath(os.path.dirname(output_filename)) - self.spawn(self.archiver + [output_filename] + objects + self.objects) - - # Not many Unices required ranlib anymore -- SunOS 4.x is, I - # think the only major Unix that does. Maybe we need some - # platform intelligence here to skip ranlib if it's not - # needed -- or maybe Python's configure script took care of - # it for us, hence the check for leading colon. 
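# --- Illustrative sketch (not from the original module) ---------------------
# create_static_lib above boils down to two process invocations; here is the
# same sequence done directly with subprocess. Object and archive names are
# hypothetical.
import subprocess

def make_static_lib(objects, output_filename,
                    archiver=("ar", "-cr"), ranlib=("ranlib",)):
    # 'ar -cr libfoo.a a.o b.o' creates the archive (or replaces members).
    subprocess.check_call([*archiver, output_filename, *objects])
    if ranlib:
        # (Re)build the archive's symbol index; harmless where not needed.
        subprocess.check_call([*ranlib, output_filename])

# make_static_lib(["a.o", "b.o"], "libfoo.a")
# -----------------------------------------------------------------------------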
- if self.ranlib: - try: - self.spawn(self.ranlib + [output_filename]) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def link( - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - objects, output_dir = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - libraries, library_dirs, runtime_library_dirs = fixed_args - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries) - if not isinstance(output_dir, (str, type(None))): - raise TypeError("'output_dir' must be a string or None") - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - ld_args = objects + self.objects + lib_opts + ['-o', output_filename] - if debug: - ld_args[:0] = ['-g'] - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - self.mkpath(os.path.dirname(output_filename)) - try: - # Select a linker based on context: linker_exe when - # building an executable or linker_so (with shared options) - # when building a shared library. - building_exe = target_desc == CCompiler.EXECUTABLE - linker = (self.linker_exe if building_exe else self.linker_so)[:] - - if target_lang == "c++" and self.compiler_cxx: - env, linker_ne = _split_env(linker) - aix, linker_na = _split_aix(linker_ne) - _, compiler_cxx_ne = _split_env(self.compiler_cxx) - _, linker_exe_ne = _split_env(self.linker_exe) - - params = _linker_params(linker_na, linker_exe_ne) - linker = env + aix + compiler_cxx_ne + params - - linker = compiler_fixup(linker, ld_args) - - self.spawn(linker + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function, in - # ccompiler.py. - - def library_dir_option(self, dir): - return "-L" + dir - - def _is_gcc(self): - cc_var = sysconfig.get_config_var("CC") - compiler = os.path.basename(shlex.split(cc_var)[0]) - return "gcc" in compiler or "g++" in compiler - - def runtime_library_dir_option(self, dir): - # XXX Hackish, at the very least. See Python bug #445902: - # http://sourceforge.net/tracker/index.php - # ?func=detail&aid=445902&group_id=5470&atid=105470 - # Linkers on different platforms need different options to - # specify that directories need to be added to the list of - # directories searched for dependencies when a dynamic library - # is sought. GCC on GNU systems (Linux, FreeBSD, ...) has to - # be told to pass the -R option through to the linker, whereas - # other compilers and gcc on other systems just know this. - # Other compilers may need something slightly different. At - # this time, there's no way to determine this information from - # the configuration data stored in the Python installation, so - # we use this hack. 
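# --- Illustrative sketch (not from the original module) ---------------------
# The platform dispatch below, condensed into a lookup table. The real method
# also checks the macOS deployment target, hp-ux, and the GNULD config var;
# this is just a summary of the flag shapes it produces.
_RPATH_STYLES = {
    "darwin, target >= 10.5": lambda d: "-Wl,-rpath," + d,
    "darwin, older":          lambda d: "-L" + d,  # no -rpath support
    "freebsd":                lambda d: "-Wl,-rpath=" + d,
    "gnu ld":                 lambda d: "-Wl,--enable-new-dtags,-R" + d,
    "other":                  lambda d: "-Wl,-R" + d,
}
# _RPATH_STYLES["freebsd"]("/opt/lib")  ->  '-Wl,-rpath=/opt/lib'
# -----------------------------------------------------------------------------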
- if sys.platform[:6] == "darwin": - from distutils.util import get_macosx_target_ver, split_version - - macosx_target_ver = get_macosx_target_ver() - if macosx_target_ver and split_version(macosx_target_ver) >= [10, 5]: - return "-Wl,-rpath," + dir - else: # no support for -rpath on earlier macOS versions - return "-L" + dir - elif sys.platform[:7] == "freebsd": - return "-Wl,-rpath=" + dir - elif sys.platform[:5] == "hp-ux": - return [ - "-Wl,+s" if self._is_gcc() else "+s", - "-L" + dir, - ] - - # For all compilers, `-Wl` is the presumed way to - # pass a compiler option to the linker and `-R` is - # the way to pass an RPATH. - if sysconfig.get_config_var("GNULD") == "yes": - # GNU ld needs an extra option to get a RUNPATH - # instead of just an RPATH. - return "-Wl,--enable-new-dtags,-R" + dir - else: - return "-Wl,-R" + dir - - def library_option(self, lib): - return "-l" + lib - - @staticmethod - def _library_root(dir): - """ - macOS users can specify an alternate SDK using'-isysroot'. - Calculate the SDK root if it is specified. - - Note that, as of Xcode 7, Apple SDKs may contain textual stub - libraries with .tbd extensions rather than the normal .dylib - shared libraries installed in /. The Apple compiler tool - chain handles this transparently but it can cause problems - for programs that are being built with an SDK and searching - for specific libraries. Callers of find_library_file need to - keep in mind that the base filename of the returned SDK library - file might have a different extension from that of the library - file installed on the running system, for example: - /Applications/Xcode.app/Contents/Developer/Platforms/ - MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/ - usr/lib/libedit.tbd - vs - /usr/lib/libedit.dylib - """ - cflags = sysconfig.get_config_var('CFLAGS') - match = re.search(r'-isysroot\s*(\S+)', cflags) - - apply_root = ( - sys.platform == 'darwin' - and match - and ( - dir.startswith('/System/') - or (dir.startswith('/usr/') and not dir.startswith('/usr/local/')) - ) - ) - - return os.path.join(match.group(1), dir[1:]) if apply_root else dir - - def find_library_file(self, dirs, lib, debug=0): - r""" - Second-guess the linker with not much hard - data to go on: GCC seems to prefer the shared library, so - assume that *all* Unix C compilers do, - ignoring even GCC's "-static" option. - - >>> compiler = UnixCCompiler() - >>> compiler._library_root = lambda dir: dir - >>> monkeypatch = getfixture('monkeypatch') - >>> monkeypatch.setattr(os.path, 'exists', lambda d: 'existing' in d) - >>> dirs = ('/foo/bar/missing', '/foo/bar/existing') - >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/') - '/foo/bar/existing/libabc.dylib' - >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/') - '/foo/bar/existing/libabc.dylib' - >>> monkeypatch.setattr(os.path, 'exists', - ... lambda d: 'existing' in d and '.a' in d) - >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/') - '/foo/bar/existing/libabc.a' - >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/') - '/foo/bar/existing/libabc.a' - """ - lib_names = ( - self.library_filename(lib, lib_type=type) - for type in 'dylib xcode_stub shared static'.split() - ) - - roots = map(self._library_root, dirs) - - searched = ( - os.path.join(root, lib_name) - for root, lib_name in itertools.product(roots, lib_names) - ) - - found = filter(os.path.exists, searched) - - # Return None if it could not be found in any dir. 
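# --- Illustrative sketch (not from the original module) ---------------------
# The same lazy search pipeline as find_library_file, without the SDK-root
# remapping: try each directory in order, preferring .dylib over .tbd over
# .so over .a within a directory, and stop at the first existing file.
import itertools
import os

def find_first_library(dirs, lib, exts=(".dylib", ".tbd", ".so", ".a")):
    names = ["lib%s%s" % (lib, ext) for ext in exts]
    candidates = (os.path.join(d, n) for d, n in itertools.product(dirs, names))
    # filter() is lazy, so os.path.exists is no longer called after the
    # first hit; next() returns None when nothing matched.
    return next(filter(os.path.exists, candidates), None)

# find_first_library(["/usr/local/lib", "/usr/lib"], "edit")
# -----------------------------------------------------------------------------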
- return next(found, None) diff --git a/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/train_sgm.sh b/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/train_sgm.sh deleted file mode 100644 index f82704e04746ec3353ae2e39f727b55fc072043b..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SGMNet/train/train_sgm.sh +++ /dev/null @@ -1,10 +0,0 @@ -OMP_NUM_THREADS=2 CUDA_VISIBLE_DEVICES='0' python -m torch.distributed.launch --nproc_per_node=1 --master_port 23003 main.py \ ---model_name=SGM \ ---config_path=configs/sgm.yaml \ ---rawdata_path=rawdata \ ---desc_path=desc_path \ ---desc_suffix=_root_1000.hdf5 \ ---dataset_path=dataset_path \ ---log_base=log_root_1k_sgm \ ---num_kpt=1000 \ ---train_iter=900000 \ No newline at end of file diff --git a/spaces/Redgon/bingo/src/components/theme-toggle.tsx b/spaces/Redgon/bingo/src/components/theme-toggle.tsx deleted file mode 100644 index 67d3f1a2c163ccbeb52c40a7e42f107190237154..0000000000000000000000000000000000000000 --- a/spaces/Redgon/bingo/src/components/theme-toggle.tsx +++ /dev/null @@ -1,31 +0,0 @@ -'use client' - -import * as React from 'react' -import { useTheme } from 'next-themes' - -import { Button } from '@/components/ui/button' -import { IconMoon, IconSun } from '@/components/ui/icons' - -export function ThemeToggle() { - const { setTheme, theme } = useTheme() - const [_, startTransition] = React.useTransition() - - return ( - - ) -} diff --git a/spaces/RitaParadaRamos/SmallCapDemo/utils.py b/spaces/RitaParadaRamos/SmallCapDemo/utils.py deleted file mode 100644 index dc2f3b70261ef1e4346c8c7990ceed6441020b57..0000000000000000000000000000000000000000 --- a/spaces/RitaParadaRamos/SmallCapDemo/utils.py +++ /dev/null @@ -1,131 +0,0 @@ -from torch.utils.data import Dataset -from PIL import Image -import torch -import json -import h5py -import bisect - -CAPTION_LENGTH = 25 -SIMPLE_PREFIX = "This image shows " - -def prep_strings(text, tokenizer, template=None, retrieved_caps=None, k=None, is_test=False, max_length=None): - - if is_test: - padding = False - truncation = False - else: - padding = True - truncation = True - - if retrieved_caps is not None: - infix = '\n\n'.join(retrieved_caps[:k]) + '.' 
- prefix = template.replace('||', infix) - else: - prefix = SIMPLE_PREFIX - - prefix_ids = tokenizer.encode(prefix) - len_prefix = len(prefix_ids) - - text_ids = tokenizer.encode(text, add_special_tokens=False) - if truncation: - text_ids = text_ids[:CAPTION_LENGTH] - input_ids = prefix_ids + text_ids if not is_test else prefix_ids - - # we ignore the prefix (minus one as the first subtoken in the prefix is not predicted) - label_ids = [-100] * (len_prefix - 1) + text_ids + [tokenizer.eos_token_id] - if padding: - input_ids += [tokenizer.pad_token_id] * (max_length - len(input_ids)) - label_ids += [-100] * (max_length - len(label_ids)) - - if is_test: - return input_ids - else: - return input_ids, label_ids - -def postprocess_preds(pred, tokenizer): - pred = pred.split(SIMPLE_PREFIX)[-1] - pred = pred.replace(tokenizer.pad_token, '') - if pred.startswith(tokenizer.bos_token): - pred = pred[len(tokenizer.bos_token):] - if pred.endswith(tokenizer.eos_token): - pred = pred[:-len(tokenizer.eos_token)] - return pred - -class TrainDataset(Dataset): - def __init__(self, df, features_path, tokenizer, rag=False, template_path=None, k=None, max_caption_length=25): - self.df = df - self.tokenizer = tokenizer - self.features = h5py.File(features_path, 'r') - - if rag: - self.template = open(template_path).read().strip() + ' ' - self.max_target_length = (max_caption_length # target caption - + max_caption_length * k # retrieved captions - + len(tokenizer.encode(self.template)) # template - + len(tokenizer.encode('\n\n')) * (k-1) # separator between captions - ) - assert k is not None - self.k = k - self.rag = rag - - def __len__(self): - return len(self.df) - - def __getitem__(self, idx): - text = self.df['text'][idx] - if self.rag: - caps = self.df['caps'][idx] - decoder_input_ids, labels = prep_strings(text, self.tokenizer, template=self.template, - retrieved_caps=caps, k=self.k, max_length=self.max_target_length) - else: - decoder_input_ids, labels = prep_strings(text, self.tokenizer, max_length=self.max_target_length) - # load precomputed features - encoder_outputs = self.features[self.df['cocoid'][idx]][()] - encoding = {"encoder_outputs": torch.tensor(encoder_outputs), - "decoder_input_ids": torch.tensor(decoder_input_ids), - "labels": torch.tensor(labels)} - - return encoding - - -def load_data_for_training(annot_path, caps_path=None): - annotations = json.load(open(annot_path))['images'] - if caps_path is not None: - retrieved_caps = json.load(open(caps_path)) - data = {'train': [], 'val': []} - - for item in annotations: - file_name = item['filename'].split('_')[-1] - if caps_path is not None: - caps = retrieved_caps[str(item['cocoid'])] - else: - caps = None - samples = [] - for sentence in item['sentences']: - samples.append({'file_name': file_name, 'cocoid': str(item['cocoid']), 'caps': caps, 'text': ' '.join(sentence['tokens'])}) - if item['split'] == 'train' or item['split'] == 'restval': - data['train'] += samples - elif item['split'] == 'val': - data['val'] += samples - return data - -def load_data_for_inference(annot_path, caps_path=None): - annotations = json.load(open(annot_path))['images'] - if caps_path is not None: - retrieved_caps = json.load(open(caps_path)) - data = {'test': [], 'val': []} - - for item in annotations: - file_name = item['filename'].split('_')[-1] - if caps_path is not None: - caps = retrieved_caps[str(item['cocoid'])] - else: - caps = None - image = {'file_name': file_name, 'caps': caps, 'image_id': str(item['cocoid'])} - if item['split'] == 'test': - 
data['test'].append(image) - elif item['split'] == 'val': - data['val'].append(image) - - return data - diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/timer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/timer.py deleted file mode 100644 index e3db7d497d8b374e18b5297e0a1d6eb186fd8cba..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/utils/timer.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from time import time - - -class TimerError(Exception): - - def __init__(self, message): - self.message = message - super(TimerError, self).__init__(message) - - -class Timer: - """A flexible Timer class. - - :Example: - - >>> import time - >>> import annotator.uniformer.mmcv as mmcv - >>> with mmcv.Timer(): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - 1.000 - >>> with mmcv.Timer(print_tmpl='it takes {:.1f} seconds'): - >>> # simulate a code block that will run for 1s - >>> time.sleep(1) - it takes 1.0 seconds - >>> timer = mmcv.Timer() - >>> time.sleep(0.5) - >>> print(timer.since_start()) - 0.500 - >>> time.sleep(0.5) - >>> print(timer.since_last_check()) - 0.500 - >>> print(timer.since_start()) - 1.000 - """ - - def __init__(self, start=True, print_tmpl=None): - self._is_running = False - self.print_tmpl = print_tmpl if print_tmpl else '{:.3f}' - if start: - self.start() - - @property - def is_running(self): - """bool: indicate whether the timer is running""" - return self._is_running - - def __enter__(self): - self.start() - return self - - def __exit__(self, type, value, traceback): - print(self.print_tmpl.format(self.since_last_check())) - self._is_running = False - - def start(self): - """Start the timer.""" - if not self._is_running: - self._t_start = time() - self._is_running = True - self._t_last = time() - - def since_start(self): - """Total time since the timer is started. - - Returns (float): Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - self._t_last = time() - return self._t_last - self._t_start - - def since_last_check(self): - """Time since the last checking. - - Either :func:`since_start` or :func:`since_last_check` is a checking - operation. - - Returns (float): Time in seconds. - """ - if not self._is_running: - raise TimerError('timer is not running') - dur = time() - self._t_last - self._t_last = time() - return dur - - -_g_timers = {} # global timers - - -def check_time(timer_id): - """Add check points in a single line. - - This method is suitable for running a task on a list of items. A timer will - be registered when the method is called for the first time. - - :Example: - - >>> import time - >>> import annotator.uniformer.mmcv as mmcv - >>> for i in range(1, 6): - >>> # simulate a code block - >>> time.sleep(i) - >>> mmcv.check_time('task1') - 2.000 - 3.000 - 4.000 - 5.000 - - Args: - timer_id (str): Timer identifier. 
- """ - if timer_id not in _g_timers: - _g_timers[timer_id] = Timer() - return 0 - else: - return _g_timers[timer_id].since_last_check() diff --git a/spaces/Rohit001/emotion_detection/README.md b/spaces/Rohit001/emotion_detection/README.md deleted file mode 100644 index fbda5cd68442044a7bb621149480a703dca08d61..0000000000000000000000000000000000000000 --- a/spaces/Rohit001/emotion_detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Emotion Detection -emoji: 🏆 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: cc ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Rongjiehuang/GenerSpeech/tasks/tts/dataset_utils.py b/spaces/Rongjiehuang/GenerSpeech/tasks/tts/dataset_utils.py deleted file mode 100644 index ea948e4166a8dd0010fe179400b6c7c2e07406a7..0000000000000000000000000000000000000000 --- a/spaces/Rongjiehuang/GenerSpeech/tasks/tts/dataset_utils.py +++ /dev/null @@ -1,260 +0,0 @@ -from utils.cwt import get_lf0_cwt -import torch.optim -import torch.utils.data -import importlib -from utils.indexed_datasets import IndexedDataset -from utils.pitch_utils import norm_interp_f0, denorm_f0, f0_to_coarse -import numpy as np -from tasks.base_task import BaseDataset -import torch -import torch.optim -import torch.utils.data -import utils -import torch.distributions -from utils.hparams import hparams -from utils.pitch_utils import norm_interp_f0 -from resemblyzer import VoiceEncoder -import json -from data_gen.tts.data_gen_utils import build_phone_encoder - -class BaseTTSDataset(BaseDataset): - def __init__(self, prefix, shuffle=False, test_items=None, test_sizes=None, data_dir=None): - super().__init__(shuffle) - self.data_dir = hparams['binary_data_dir'] if data_dir is None else data_dir - self.prefix = prefix - self.hparams = hparams - self.indexed_ds = None - self.ext_mel2ph = None - - def load_size(): - self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy') - - if prefix == 'test': - if test_items is not None: - self.indexed_ds, self.sizes = test_items, test_sizes - else: - load_size() - if hparams['num_test_samples'] > 0: - self.avail_idxs = [x for x in range(hparams['num_test_samples']) \ - if x < len(self.sizes)] - if len(hparams['test_ids']) > 0: - self.avail_idxs = hparams['test_ids'] + self.avail_idxs - else: - self.avail_idxs = list(range(len(self.sizes))) - else: - load_size() - self.avail_idxs = list(range(len(self.sizes))) - - if hparams['min_frames'] > 0: - self.avail_idxs = [ - x for x in self.avail_idxs if self.sizes[x] >= hparams['min_frames']] - self.sizes = [self.sizes[i] for i in self.avail_idxs] - - def _get_item(self, index): - if hasattr(self, 'avail_idxs') and self.avail_idxs is not None: - index = self.avail_idxs[index] - if self.indexed_ds is None: - self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}') - return self.indexed_ds[index] - - def __getitem__(self, index): - hparams = self.hparams - item = self._get_item(index) - assert len(item['mel']) == self.sizes[index], (len(item['mel']), self.sizes[index]) - max_frames = hparams['max_frames'] - spec = torch.Tensor(item['mel'])[:max_frames] - max_frames = spec.shape[0] // hparams['frames_multiple'] * hparams['frames_multiple'] - spec = spec[:max_frames] - phone = torch.LongTensor(item['phone'][:hparams['max_input_tokens']]) - sample = { - "id": index, - "item_name": item['item_name'], - "text": item['txt'], - "txt_token": phone, - "mel": spec, - 
"mel_nonpadding": spec.abs().sum(-1) > 0, - } - if hparams['use_spk_embed']: - sample["spk_embed"] = torch.Tensor(item['spk_embed']) - if hparams['use_spk_id']: - sample["spk_id"] = item['spk_id'] - return sample - - def collater(self, samples): - if len(samples) == 0: - return {} - hparams = self.hparams - id = torch.LongTensor([s['id'] for s in samples]) - item_names = [s['item_name'] for s in samples] - text = [s['text'] for s in samples] - txt_tokens = utils.collate_1d([s['txt_token'] for s in samples], 0) - mels = utils.collate_2d([s['mel'] for s in samples], 0.0) - txt_lengths = torch.LongTensor([s['txt_token'].numel() for s in samples]) - mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples]) - - batch = { - 'id': id, - 'item_name': item_names, - 'nsamples': len(samples), - 'text': text, - 'txt_tokens': txt_tokens, - 'txt_lengths': txt_lengths, - 'mels': mels, - 'mel_lengths': mel_lengths, - } - - if hparams['use_spk_embed']: - spk_embed = torch.stack([s['spk_embed'] for s in samples]) - batch['spk_embed'] = spk_embed - if hparams['use_spk_id']: - spk_ids = torch.LongTensor([s['spk_id'] for s in samples]) - batch['spk_ids'] = spk_ids - return batch - - -class FastSpeechDataset(BaseTTSDataset): - def __init__(self, prefix, shuffle=False, test_items=None, test_sizes=None, data_dir=None): - super().__init__(prefix, shuffle, test_items, test_sizes, data_dir) - self.f0_mean, self.f0_std = hparams.get('f0_mean', None), hparams.get('f0_std', None) - if prefix == 'test' and hparams['test_input_dir'] != '': - self.data_dir = hparams['test_input_dir'] - self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}') - self.indexed_ds = sorted(self.indexed_ds, key=lambda item: item['item_name']) - items = {} - for i in range(len(self.indexed_ds)): - speaker = self.indexed_ds[i]['item_name'].split('_')[0] - if speaker not in items.keys(): - items[speaker] = [i] - else: - items[speaker].append(i) - sort_item = sorted(items.values(), key=lambda item_pre_speaker: len(item_pre_speaker), reverse=True) - self.avail_idxs = [n for a in sort_item for n in a][:hparams['num_test_samples']] - self.indexed_ds, self.sizes = self.load_test_inputs() - self.avail_idxs = [i for i in range(hparams['num_test_samples'])] - - if hparams['pitch_type'] == 'cwt': - _, hparams['cwt_scales'] = get_lf0_cwt(np.ones(10)) - - def __getitem__(self, index): - sample = super(FastSpeechDataset, self).__getitem__(index) - item = self._get_item(index) - hparams = self.hparams - max_frames = hparams['max_frames'] - spec = sample['mel'] - T = spec.shape[0] - phone = sample['txt_token'] - sample['energy'] = (spec.exp() ** 2).sum(-1).sqrt() - sample['mel2ph'] = mel2ph = torch.LongTensor(item['mel2ph'])[:T] if 'mel2ph' in item else None - if hparams['use_pitch_embed']: - assert 'f0' in item - if hparams.get('normalize_pitch', False): - f0 = item["f0"] - if len(f0 > 0) > 0 and f0[f0 > 0].std() > 0: - f0[f0 > 0] = (f0[f0 > 0] - f0[f0 > 0].mean()) / f0[f0 > 0].std() * hparams['f0_std'] + \ - hparams['f0_mean'] - f0[f0 > 0] = f0[f0 > 0].clip(min=60, max=500) - pitch = f0_to_coarse(f0) - pitch = torch.LongTensor(pitch[:max_frames]) - else: - pitch = torch.LongTensor(item.get("pitch"))[:max_frames] if "pitch" in item else None - f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams) - uv = torch.FloatTensor(uv) - f0 = torch.FloatTensor(f0) - if hparams['pitch_type'] == 'cwt': - cwt_spec = torch.Tensor(item['cwt_spec'])[:max_frames] - f0_mean = item.get('f0_mean', item.get('cwt_mean')) - f0_std = item.get('f0_std', 
item.get('cwt_std')) - sample.update({"cwt_spec": cwt_spec, "f0_mean": f0_mean, "f0_std": f0_std}) - elif hparams['pitch_type'] == 'ph': - if "f0_ph" in item: - f0 = torch.FloatTensor(item['f0_ph']) - else: - f0 = denorm_f0(f0, None, hparams) - f0_phlevel_sum = torch.zeros_like(phone).float().scatter_add(0, mel2ph - 1, f0) - f0_phlevel_num = torch.zeros_like(phone).float().scatter_add( - 0, mel2ph - 1, torch.ones_like(f0)).clamp_min(1) - f0_ph = f0_phlevel_sum / f0_phlevel_num - f0, uv = norm_interp_f0(f0_ph, hparams) - else: - f0 = uv = torch.zeros_like(mel2ph) - pitch = None - sample["f0"], sample["uv"], sample["pitch"] = f0, uv, pitch - if hparams['use_spk_embed']: - sample["spk_embed"] = torch.Tensor(item['spk_embed']) - if hparams['use_spk_id']: - sample["spk_id"] = item['spk_id'] - return sample - - def collater(self, samples): - if len(samples) == 0: - return {} - hparams = self.hparams - batch = super(FastSpeechDataset, self).collater(samples) - f0 = utils.collate_1d([s['f0'] for s in samples], 0.0) - pitch = utils.collate_1d([s['pitch'] for s in samples]) if samples[0]['pitch'] is not None else None - uv = utils.collate_1d([s['uv'] for s in samples]) - energy = utils.collate_1d([s['energy'] for s in samples], 0.0) - mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \ - if samples[0]['mel2ph'] is not None else None - batch.update({ - 'mel2ph': mel2ph, - 'energy': energy, - 'pitch': pitch, - 'f0': f0, - 'uv': uv, - }) - if hparams['pitch_type'] == 'cwt': - cwt_spec = utils.collate_2d([s['cwt_spec'] for s in samples]) - f0_mean = torch.Tensor([s['f0_mean'] for s in samples]) - f0_std = torch.Tensor([s['f0_std'] for s in samples]) - batch.update({'cwt_spec': cwt_spec, 'f0_mean': f0_mean, 'f0_std': f0_std}) - return batch - - def load_test_inputs(self): - binarizer_cls = hparams.get("binarizer_cls", 'data_gen.tts.base_binarizerr.BaseBinarizer') - pkg = ".".join(binarizer_cls.split(".")[:-1]) - cls_name = binarizer_cls.split(".")[-1] - binarizer_cls = getattr(importlib.import_module(pkg), cls_name) - ph_set_fn = f"{hparams['binary_data_dir']}/phone_set.json" - ph_set = json.load(open(ph_set_fn, 'r')) - print("| phone set: ", ph_set) - phone_encoder = build_phone_encoder(hparams['binary_data_dir']) - word_encoder = None - voice_encoder = VoiceEncoder().cuda() - encoder = [phone_encoder, word_encoder] - sizes = [] - items = [] - for i in range(len(self.avail_idxs)): - item = self._get_item(i) - - item2tgfn = f"{hparams['test_input_dir'].replace('binary', 'processed')}/mfa_outputs/{item['item_name']}.TextGrid" - item = binarizer_cls.process_item(item['item_name'], item['ph'], item['txt'], item2tgfn, - item['wav_fn'], item['spk_id'], encoder, hparams['binarization_args']) - item['spk_embed'] = voice_encoder.embed_utterance(item['wav']) \ - if hparams['binarization_args']['with_spk_embed'] else None # 判断是否保存embedding文件 - items.append(item) - sizes.append(item['len']) - return items, sizes - -class FastSpeechWordDataset(FastSpeechDataset): - def __getitem__(self, index): - sample = super(FastSpeechWordDataset, self).__getitem__(index) - item = self._get_item(index) - max_frames = hparams['max_frames'] - sample["ph_words"] = item["ph_words"] - sample["word_tokens"] = torch.LongTensor(item["word_tokens"]) - sample["mel2word"] = torch.LongTensor(item.get("mel2word"))[:max_frames] - sample["ph2word"] = torch.LongTensor(item['ph2word'][:hparams['max_input_tokens']]) - return sample - - def collater(self, samples): - batch = super(FastSpeechWordDataset, self).collater(samples) - 
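# --- Illustrative sketch (not from the original module) ---------------------
# A minimal stand-in for the utils.collate_1d used throughout these
# collaters: right-pad variable-length 1D tensors to the batch maximum.
import torch

def collate_1d_sketch(values, pad_value=0):
    size = max(v.size(0) for v in values)
    out = values[0].new_full((len(values), size), pad_value)
    for i, v in enumerate(values):
        out[i, :v.size(0)] = v
    return out

# collate_1d_sketch([torch.tensor([1, 2, 3]), torch.tensor([4])])
#   -> tensor([[1, 2, 3],
#              [4, 0, 0]])
# -----------------------------------------------------------------------------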
ph_words = [s['ph_words'] for s in samples] - batch['ph_words'] = ph_words - word_tokens = utils.collate_1d([s['word_tokens'] for s in samples], 0) - batch['word_tokens'] = word_tokens - mel2word = utils.collate_1d([s['mel2word'] for s in samples], 0) - batch['mel2word'] = mel2word - ph2word = utils.collate_1d([s['ph2word'] for s in samples], 0) - batch['ph2word'] = ph2word - return batch diff --git a/spaces/Rowanchav/anything-v3.0/utils.py b/spaces/Rowanchav/anything-v3.0/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/Rowanchav/anything-v3.0/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/Sandiago21/automatic-speech-recognition-greek/README.md b/spaces/Sandiago21/automatic-speech-recognition-greek/README.md deleted file mode 100644 index 07ac9e3bb40108407ada8df0942379f7175d0bca..0000000000000000000000000000000000000000 --- a/spaces/Sandiago21/automatic-speech-recognition-greek/README.md +++ /dev/null @@ -1,6 +0,0 @@ ---- -title: automatic-speech-recognition-greek -app_file: app.py -sdk: gradio -sdk_version: 3.36.0 ---- diff --git a/spaces/Seogmin/NLP/README.md b/spaces/Seogmin/NLP/README.md deleted file mode 100644 index c128c3cb25aab0470dc15f4a3d80f18e639ec64d..0000000000000000000000000000000000000000 --- a/spaces/Seogmin/NLP/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: NLP -emoji: 🐠 -colorFrom: pink -colorTo: purple -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ServerX/PorcoDiaz/diffq/uniform.py b/spaces/ServerX/PorcoDiaz/diffq/uniform.py deleted file mode 100644 index f61e9129c04caaa33c66f726bf2433d51689cfa5..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/diffq/uniform.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Classic uniform quantization over n bits. -""" -from typing import Tuple -import torch - -from .base import BaseQuantizer -from .utils import simple_repr - - -def uniform_quantize(p: torch.Tensor, bits: torch.Tensor = torch.tensor(8.)): - """ - Quantize the given weights over `bits` bits. - - Returns: - - quantized levels - - (min, max) range. - - """ - assert (bits >= 1).all() and (bits <= 15).all() - num_levels = (2 ** bits.float()).long() - mn = p.min().item() - mx = p.max().item() - p = (p - mn) / (mx - mn) # put p in [0, 1] - unit = 1 / (num_levels - 1) # quantization unit - levels = (p / unit).round() - if (bits <= 8).all(): - levels = levels.byte() - else: - levels = levels.short() - return levels, (mn, mx) - - -def uniform_unquantize(levels: torch.Tensor, scales: Tuple[float, float], - bits: torch.Tensor = torch.tensor(8.)): - """ - Unquantize the weights from the levels and scale. Return a float32 tensor. 
- """ - mn, mx = scales - num_levels = 2 ** bits.float() - unit = 1 / (num_levels - 1) - levels = levels.float() - p = levels * unit # in [0, 1] - return p * (mx - mn) + mn - - -class UniformQuantizer(BaseQuantizer): - def __init__(self, model: torch.nn.Module, bits: float = 8., min_size: float = 0.01, - float16: bool = False, qat: bool = False, exclude=[], detect_bound=True): - """ - Args: - model (torch.nn.Module): model to quantize - bits (float): number of bits to quantize over. - min_size (float): minimum size in MB of a parameter to be quantized. - float16 (bool): if a layer is smaller than min_size, should we still do float16? - qat (bool): perform quantized aware training. - exclude (list[str]): list of patterns used to match parameters to exclude. - For instance `['bias']` to exclude all bias terms. - detect_bound (bool): if True, will detect bound parameters and reuse - the same quantized tensor for both. - """ - self.bits = float(bits) - self.qat = qat - - super().__init__(model, min_size, float16, exclude, detect_bound) - - def __repr__(self): - return simple_repr(self, ) - - def _pre_forward_train(self): - if self.qat: - for qparam in self._qparams: - if qparam.other is not None: - new_param = qparam.other.module._parameters[qparam.other.name] - else: - quantized = self._quantize_param(qparam) - qvalue = self._unquantize_param(qparam, quantized) - new_param = qparam.param + (qvalue - qparam.param).detach() - qparam.module._parameters[qparam.name] = new_param - return True - return False - - def _post_forward_train(self): - if self.qat: - for qparam in self._qparams: - qparam.module._parameters[qparam.name] = qparam.param - return True - return False - - def _quantize_param(self, qparam): - levels, scales = uniform_quantize(qparam.param.data, torch.tensor(self.bits)) - return (levels, scales) - - def _unquantize_param(self, qparam, quantized): - levels, scales = quantized - return uniform_unquantize(levels, scales, torch.tensor(self.bits)) - - def model_size(self): - """ - Non differentiable model size in MB. - """ - total = super().model_size() - subtotal = 0 - for qparam in self._qparams: - if qparam.other is None: # if parameter is bound, count only one copy. - subtotal += self.bits * qparam.param.numel() + 64 # 2 float for the overall scales - subtotal /= 2**20 * 8 # bits to MegaBytes - return total + subtotal - - def true_model_size(self): - """ - Return the true quantized model size, in MB, without extra - compression. 
- """ - return self.model_size().item() diff --git a/spaces/SeyedAli/Image-Similarity/app.py b/spaces/SeyedAli/Image-Similarity/app.py deleted file mode 100644 index 646c93c738b6a8b45b3b3d3b5fea481fe56b9088..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Image-Similarity/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import gradio as gr -import os -import random -from src.model import simlarity_model as model -from src.similarity.similarity import Similarity - -similarity = Similarity() -models = similarity.get_models() - -def check(img_main, img_1, img_2, model_idx): - result = similarity.check_similarity([img_main, img_1, img_2], models[model_idx]) - return result - -with gr.Blocks() as demo: - gr.Markdown('بررسی شباهت عکس ها') - img_main = gr.Text(label='عکس اصلی', placeholder='https://myimage.jpg') - - gr.Markdown('عکس های مقایسه') - img_1 = gr.Text(label='عکس اول', placeholder='https://myimage_1.jpg') - img_2 = gr.Text(label='عکس دوم', placeholder='https://myimage_2.jpg') - - gr.Markdown('انتخاب مدل') - model = gr.Dropdown([m.name for m in models], label='مدل', type='index') - - gallery = gr.Gallery( - label="عکس های تولیدی", show_label=False, elem_id="gallery" - ).style(grid=[2], height="auto") - - submit_btn = gr.Button('بررسی شباهت') - submit_btn.click(fn=check,inputs=[img_main, img_1, img_2, model], outputs=gallery) - -demo.launch() \ No newline at end of file diff --git a/spaces/ShkShahid/Auto-encoder_For_Image_Reconstruction/hdrcnn_predict.py b/spaces/ShkShahid/Auto-encoder_For_Image_Reconstruction/hdrcnn_predict.py deleted file mode 100644 index b06b758270adc0f77f0e60bd61c648b4a614cd63..0000000000000000000000000000000000000000 --- a/spaces/ShkShahid/Auto-encoder_For_Image_Reconstruction/hdrcnn_predict.py +++ /dev/null @@ -1,146 +0,0 @@ - - -import os, sys -import tensorflow as tf -import tensorlayer as tl -import numpy as np -import network, img_io - -import time - - -eps = 1e-5 - -def print_(str, color='', bold=False): - if color == 'w': - sys.stdout.write('\033[93m') - elif color == "e": - sys.stdout.write('\033[91m') - elif color == "m": - sys.stdout.write('\033[95m') - - if bold: - sys.stdout.write('\033[1m') - - sys.stdout.write(str) - sys.stdout.write('\033[0m') - sys.stdout.flush() - - -# Settings, using TensorFlow arguments -FLAGS = tf.flags.FLAGS -tf.flags.DEFINE_integer("width", "1024", "Reconstruction image width") -tf.flags.DEFINE_integer("height", "768", "Reconstruction image height") -tf.flags.DEFINE_string("im_dir", "Input_Dir", "Path to image directory or an individual image") -tf.flags.DEFINE_string("out_dir", "Output_Dir", "Path to output directory") -tf.flags.DEFINE_string("params", "hdrcnn_params.npz", "Path to trained CNN weights") -tf.flags.DEFINE_float("scaling", "1.0", "Pre-scaling, which is followed by clipping, in order to remove compression artifacts close to highlights") -tf.flags.DEFINE_float("gamma", "1.0", "Gamma/exponential curve applied before, and inverted after, prediction. 
This can be used to control the boost of reconstructed pixels.") - -# Round to be multiple of 32, so that autoencoder pooling+upsampling -# yields same size as input image -sx = int(np.maximum(32, np.round(FLAGS.width/32.0)*32)) -sy = int(np.maximum(32, np.round(FLAGS.height/32.0)*32)) -if sx != FLAGS.width or sy != FLAGS.height: - print_("Warning: ", 'w', True) - print_("prediction size has been changed from %dx%d pixels to %dx%d\n"%(FLAGS.width, FLAGS.height, sx, sy), 'w') - print_(" pixels, to comply with autoencoder pooling and up-sampling.\n\n", 'w') - -# Info -print_("\n\n\t-------------------------------------------------------------------\n", 'm') -print_("\t HDR image reconstruction from a single exposure using deep CNNs\n\n", 'm') -print_("\t Prediction settings\n", 'm') -print_("\t -------------------\n", 'm') -print_("\t Input image directory/file: %s\n" % FLAGS.im_dir, 'm') -print_("\t Output directory: %s\n" % FLAGS.out_dir, 'm') -print_("\t CNN weights: %s\n" % FLAGS.params, 'm') -print_("\t Prediction resolution: %dx%d pixels\n" % (sx, sy), 'm') -if FLAGS.scaling > 1.0: - print_("\t Pre-scaling: %0.4f\n" % FLAGS.scaling, 'm') -if FLAGS.gamma > 1.0 + eps or FLAGS.gamma < 1.0 - eps: - print_("\t Gamma: %0.4f\n" % FLAGS.gamma, 'm') -print_("\t-------------------------------------------------------------------\n\n\n", 'm') - -# Single frame -frames = [FLAGS.im_dir] - -# If directory is supplied, get names of all files in the path -if os.path.isdir(FLAGS.im_dir): - frames = [os.path.join(FLAGS.im_dir, name) - for name in sorted(os.listdir(FLAGS.im_dir)) - if os.path.isfile(os.path.join(FLAGS.im_dir, name))] - -# Placeholder for image input -x = tf.placeholder(tf.float32, shape=[1, sy, sx, 3]) - -# HDR reconstruction autoencoder model -print_("Network setup:\n") -net = network.model(x) - -# The CNN prediction (this also includes blending with input image x) -y = network.get_final(net, x) - -# TensorFlow session for running inference -sess = tf.InteractiveSession() - -# Load trained CNN weights -print_("\nLoading trained parameters from '%s'..."%FLAGS.params) -load_params = tl.files.load_npz(name=FLAGS.params) -tl.files.assign_params(sess, load_params, net) -print_("\tdone\n") - -if not os.path.exists(FLAGS.out_dir): - os.makedirs(FLAGS.out_dir) - -print_("\nStarting prediction...\n\n") -k = 0 -for i in range(len(frames)): - print("Frame %d: '%s'"%(i,frames[i])) - - try: - # Read frame - print_("\tReading...") - x_buffer = img_io.readLDR(frames[i], (sy,sx), True, FLAGS.scaling) - print_("\tdone") - - print_("\t(Saturation: %0.2f%%)\n" % (100.0*(x_buffer>=1).sum()/x_buffer.size), 'm') - - # Run prediction. - # The gamma value is used to allow for boosting/reducing the intensity of - # the reconstructed highlights. If y = f(x) is the reconstruction, the gamma - # g alters this according to y = f(x^(1/g))^g - print_("\tInference...") - feed_dict = {x: np.power(np.maximum(x_buffer, 0.0), 1.0/FLAGS.gamma)} - y_predict = sess.run([y], feed_dict=feed_dict) - y_predict = np.power(np.maximum(y_predict, 0.0), FLAGS.gamma) - print_("\tdone\n") - - # Gamma corrected output - y_gamma = np.power(np.maximum(y_predict, 0.0), 0.5) - - # Write to disc - print_("\tWriting...") - k += 1; - img_io.writeLDR(x_buffer, '%s/%06d_in.png' % (FLAGS.out_dir, k), -3) - img_io.writeLDR(y_gamma, '%s/%06d_out.png' % (FLAGS.out_dir, k), -3) - #img_io.writeEXR(y_predict, '%s/%06d_out.exr' % (FLAGS.out_dir, k)) - print_("\tdone\n") - - except img_io.IOException as e: - print_("\n\t\tWarning! 
", 'w', True) - print_("%s\n"%e, 'w') - except Exception as e: - print_("\n\t\tError: ", 'e', True) - print_("%s\n"%e, 'e') - -print_("Done!\n") - -print_("-------------------------------------------------------------------\n\n\n", 'm') - -print_("\tCalculating Confusion Matrix...\n") -time.sleep(10) - -import Confusion_Matrix - -sess.close() - diff --git a/spaces/SilenWang/ReviewGPT/utils/ris_parser.py b/spaces/SilenWang/ReviewGPT/utils/ris_parser.py deleted file mode 100644 index dc849ad9a233822f7f67ef71be7c5cfaa692ca0c..0000000000000000000000000000000000000000 --- a/spaces/SilenWang/ReviewGPT/utils/ris_parser.py +++ /dev/null @@ -1,53 +0,0 @@ -import rispy -import pandas as pd -import io - -class RisFileException(Exception): - pass - - -class RisFile: - ''' - ris文件解析器, 使用rispy模块 - ''' - def __init__(self, file): - self.file = file - self.fHandle = None - - - def _fetch_info(self, kwd: list): - collected = [] - for entry in rispy.load(self.fHanlde): - rec = {} - for key in kwd: - if not key in self.keywords: - raise RisFileException(f'Not valid info that can be parsed from ris file, all keywords: {self.keywords}') - rec[key] = entry[key] - collected.append(rec) - return pd.DataFrame(collected) - - - def parse_info(self, kwd: list): - ''' - 解析给定区域的数值, 如果字段不存在则抛出错误 - ''' - if isinstance(self.file, str): - with open(self.file, 'r') as self.fHanlde: - return self._fetch_info(kwd) - elif isinstance(self.file, io.StringIO): - self.fHanlde = self.file - return self._fetch_info(kwd) - - - @property - def keywords(self): - ''' - 调用rispy给出可解析的所有字段 - ''' - return set(rispy.TAG_KEY_MAPPING.values()) - - -if __name__ == "__main__": - risFile = RisFile(file='/home/silen/git_proj/ReviewGPT/test/G1/Paper035') - print(risFile.keywords) - print(risFile.parse_info(kwd=['doi', 'title', 'abstract'])) diff --git a/spaces/SoUmNerd/Phind-Phind-CodeLlama-34B-Python-v1/README.md b/spaces/SoUmNerd/Phind-Phind-CodeLlama-34B-Python-v1/README.md deleted file mode 100644 index 1b2009a2c60abc44c1fcffa3cab9021832d323f6..0000000000000000000000000000000000000000 --- a/spaces/SoUmNerd/Phind-Phind-CodeLlama-34B-Python-v1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Phind Phind CodeLlama 34B Python V1 -emoji: 📈 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Solis/Solis/llm_src/utils/solis/__init__.py b/spaces/Solis/Solis/llm_src/utils/solis/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/discriminators/mpd.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/discriminators/mpd.py deleted file mode 100644 index 8debd1fa72d77ca03df680facb60bdf79638cade..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/discriminators/mpd.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import typing as tp - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from ...modules import NormConv2d -from .base import MultiDiscriminator, MultiDiscriminatorOutputType - - -def get_padding(kernel_size: int, dilation: int = 1) -> int: - return int((kernel_size * dilation - dilation) / 2) - - -class PeriodDiscriminator(nn.Module): - """Period sub-discriminator. - - Args: - period (int): Period between samples of audio. - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - n_layers (int): Number of convolutional layers. - kernel_sizes (list of int): Kernel sizes for convolutions. - stride (int): Stride for convolutions. - filters (int): Initial number of filters in convolutions. - filters_scale (int): Multiplier of number of filters as we increase depth. - max_filters (int): Maximum number of filters. - norm (str): Normalization method. - activation (str): Activation function. - activation_params (dict): Parameters to provide to the activation function. - """ - def __init__(self, period: int, in_channels: int = 1, out_channels: int = 1, - n_layers: int = 5, kernel_sizes: tp.List[int] = [5, 3], stride: int = 3, - filters: int = 8, filters_scale: int = 4, max_filters: int = 1024, - norm: str = 'weight_norm', activation: str = 'LeakyReLU', - activation_params: dict = {'negative_slope': 0.2}): - super().__init__() - self.period = period - self.n_layers = n_layers - self.activation = getattr(torch.nn, activation)(**activation_params) - self.convs = nn.ModuleList() - in_chs = in_channels - for i in range(self.n_layers): - out_chs = min(filters * (filters_scale ** (i + 1)), max_filters) - eff_stride = 1 if i == self.n_layers - 1 else stride - self.convs.append(NormConv2d(in_chs, out_chs, kernel_size=(kernel_sizes[0], 1), stride=(eff_stride, 1), - padding=((kernel_sizes[0] - 1) // 2, 0), norm=norm)) - in_chs = out_chs - self.conv_post = NormConv2d(in_chs, out_channels, kernel_size=(kernel_sizes[1], 1), stride=1, - padding=((kernel_sizes[1] - 1) // 2, 0), norm=norm) - - def forward(self, x: torch.Tensor): - fmap = [] - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), 'reflect') - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for conv in self.convs: - x = conv(x) - x = self.activation(x) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - # x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(MultiDiscriminator): - """Multi-Period (MPD) Discriminator. - - Args: - in_channels (int): Number of input channels. - out_channels (int): Number of output channels. - periods (Sequence[int]): Periods between samples of audio for the sub-discriminators. 
- **kwargs: Additional args for `PeriodDiscriminator` - """ - def __init__(self, in_channels: int = 1, out_channels: int = 1, - periods: tp.Sequence[int] = [2, 3, 5, 7, 11], **kwargs): - super().__init__() - self.discriminators = nn.ModuleList([ - PeriodDiscriminator(p, in_channels, out_channels, **kwargs) for p in periods - ]) - - @property - def num_discriminators(self): - return len(self.discriminators) - - def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType: - logits = [] - fmaps = [] - for disc in self.discriminators: - logit, fmap = disc(x) - logits.append(logit) - fmaps.append(fmap) - return logits, fmaps diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/export_legacy.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/export_legacy.py deleted file mode 100644 index 52f145f3148c3e9fdba436273bc45480fbae6481..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/utils/export_legacy.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Legacy functions used at the time of the first release, kept for referencd. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf, DictConfig -import torch - - -def _clean_lm_cfg(cfg: DictConfig): - OmegaConf.set_struct(cfg, False) - # This used to be set automatically in the LM solver, need a more robust solution - # for the future. - cfg['transformer_lm']['card'] = 2048 - cfg['transformer_lm']['n_q'] = 4 - # Experimental params no longer supported. - bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters', - 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop'] - for name in bad_params: - del cfg['transformer_lm'][name] - OmegaConf.set_struct(cfg, True) - return cfg - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['ema']['state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['fsdp_best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg'])) - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/Sumit7864/Image-Enhancer/docs/model_zoo.md b/spaces/Sumit7864/Image-Enhancer/docs/model_zoo.md deleted file mode 100644 index 132cc514bac6b447addac8485e0622a834d34474..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/docs/model_zoo.md +++ /dev/null @@ -1,49 +0,0 @@ -# :european_castle: Model Zoo - -- [For General Images](#for-general-images) -- [For Anime Images](#for-anime-images) -- [For Anime Videos](#for-anime-videos) - ---- - -## For General Images - -| Models | Scale | Description | -| ------------------------------------------------------------------------------------------------------------------------------- | :---- | 
:------------------------------------------- | -| [RealESRGAN_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth) | X4 | X4 model for general images | -| [RealESRGAN_x2plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth) | X2 | X2 model for general images | -| [RealESRNet_x4plus](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/RealESRNet_x4plus.pth) | X4 | X4 model with MSE loss (over-smooth effects) | -| [official ESRGAN_x4](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.1/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth) | X4 | official ESRGAN model | -| [realesr-general-x4v3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth) | X4 (can also be used for X1, X2, X3) | A tiny small model (consume much fewer GPU memory and time); not too strong deblur and denoise capacity | - -The following models are **discriminators**, which are usually used for fine-tuning. - -| Models | Corresponding model | -| ---------------------------------------------------------------------------------------------------------------------- | :------------------ | -| [RealESRGAN_x4plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x4plus_netD.pth) | RealESRGAN_x4plus | -| [RealESRGAN_x2plus_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.3/RealESRGAN_x2plus_netD.pth) | RealESRGAN_x2plus | - -## For Anime Images / Illustrations - -| Models | Scale | Description | -| ------------------------------------------------------------------------------------------------------------------------------ | :---- | :---------------------------------------------------------- | -| [RealESRGAN_x4plus_anime_6B](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth) | X4 | Optimized for anime images; 6 RRDB blocks (smaller network) | - -The following models are **discriminators**, which are usually used for fine-tuning. - -| Models | Corresponding model | -| ---------------------------------------------------------------------------------------------------------------------------------------- | :------------------------- | -| [RealESRGAN_x4plus_anime_6B_netD](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B_netD.pth) | RealESRGAN_x4plus_anime_6B | - -## For Animation Videos - -| Models | Scale | Description | -| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- | -| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X41 | Anime video model with XS size | - -Note:
-1 This model can also be used for X1, X2, X3. - -The following models are **discriminators**, which are usually used for fine-tuning. - -TODO diff --git a/spaces/Sunbird/runyankole2english-stt/stitched_model.py b/spaces/Sunbird/runyankole2english-stt/stitched_model.py deleted file mode 100644 index 06933709570347fbde2a19c417822c44f70591c6..0000000000000000000000000000000000000000 --- a/spaces/Sunbird/runyankole2english-stt/stitched_model.py +++ /dev/null @@ -1,30 +0,0 @@ -import torch -from torch import nn -from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC, AutoTokenizer, AutoModelForSeq2SeqLM - -class CombinedModel(nn.Module): - def __init__(self, stt_model_name, nmt_model_name,device = "cuda"): - super(CombinedModel, self).__init__() - - self.stt_processor = Wav2Vec2Processor.from_pretrained(stt_model_name) - self.stt_model = Wav2Vec2ForCTC.from_pretrained(stt_model_name) - self.nmt_tokenizer = AutoTokenizer.from_pretrained(nmt_model_name) - self.nmt_model = AutoModelForSeq2SeqLM.from_pretrained(nmt_model_name) - self.device = device - - def forward(self, batch, *args, **kwargs): - # Use stt_model to transcribe the audio to text - device = self.device - audio = torch.tensor(batch["audio"][0]).to(self.device) - input_features = self.stt_processor(audio,sampling_rate=16000, return_tensors="pt",max_length=110000, padding=True, truncation=True) - stt_output = self.stt_model(input_features.input_values.to(device), attention_mask= input_features.attention_mask.to(device) ) - transcription = self.stt_processor.decode(torch.squeeze(stt_output.logits.argmax(axis=-1)).to(device)) - input_nmt_tokens = self.nmt_tokenizer(transcription, return_tensors="pt", padding=True, truncation=True) - output_nmt_output = self.nmt_model.generate(input_ids = input_nmt_tokens.input_ids.to(device), attention_mask= input_nmt_tokens.attention_mask.to(device)) - decoded_nmt_output = self.nmt_tokenizer.batch_decode(output_nmt_output, skip_special_tokens=True) - - - return transcription, decoded_nmt_output - -# Usage -#model = CombinedModel("ak3ra/wav2vec2-sunbird-speech-lug", "Sunbird/sunbird-mul-en-mbart-merged", device="cpu") diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_xml.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_xml.py deleted file mode 100644 index 5d1ed0fd729f8bb70c6443b6c12e4d96eec8ae9f..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_xml.py +++ /dev/null @@ -1,424 +0,0 @@ -from _pydev_bundle import pydev_log -from _pydevd_bundle import pydevd_extension_utils -from _pydevd_bundle import pydevd_resolver -import sys -from _pydevd_bundle.pydevd_constants import BUILTINS_MODULE_NAME, MAXIMUM_VARIABLE_REPRESENTATION_SIZE, \ - RETURN_VALUES_DICT, LOAD_VALUES_ASYNC, DEFAULT_VALUE -from _pydev_bundle.pydev_imports import quote -from _pydevd_bundle.pydevd_extension_api import TypeResolveProvider, StrPresentationProvider -from _pydevd_bundle.pydevd_utils import isinstance_checked, hasattr_checked, DAPGrouper -from _pydevd_bundle.pydevd_resolver import get_var_scope, MoreItems, MoreItemsRange -from typing import Optional - -try: - import types - - frame_type = types.FrameType -except: - frame_type = None - - -def make_valid_xml_value(s): - # Same thing as xml.sax.saxutils.escape but also escaping double quotes. 
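# --- Illustrative sketch (not from the original module) ---------------------
# The equivalence mentioned above: the replacements map '&' -> '&amp;',
# '<' -> '&lt;', '>' -> '&gt;' and '"' -> '&quot;', which the stdlib escape
# reproduces when given an extra entity for the double quote.
from xml.sax.saxutils import escape

assert escape('a < "b" & c', entities={'"': '&quot;'}) == 'a &lt; &quot;b&quot; &amp; c'
# -----------------------------------------------------------------------------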
- return s.replace("&", "&").replace('<', '<').replace('>', '>').replace('"', '"') - - -class ExceptionOnEvaluate: - - def __init__(self, result, etype, tb): - self.result = result - self.etype = etype - self.tb = tb - - -_IS_JYTHON = sys.platform.startswith("java") - - -def _create_default_type_map(): - default_type_map = [ - # None means that it should not be treated as a compound variable - - # isintance does not accept a tuple on some versions of python, so, we must declare it expanded - (type(None), None,), - (int, None), - (float, None), - (complex, None), - (str, None), - (tuple, pydevd_resolver.tupleResolver), - (list, pydevd_resolver.tupleResolver), - (dict, pydevd_resolver.dictResolver), - ] - try: - from collections import OrderedDict - default_type_map.insert(0, (OrderedDict, pydevd_resolver.orderedDictResolver)) - # we should put it before dict - except: - pass - - try: - default_type_map.append((long, None)) # @UndefinedVariable - except: - pass # not available on all python versions - - default_type_map.append((DAPGrouper, pydevd_resolver.dapGrouperResolver)) - default_type_map.append((MoreItems, pydevd_resolver.forwardInternalResolverToObject)) - default_type_map.append((MoreItemsRange, pydevd_resolver.forwardInternalResolverToObject)) - - try: - default_type_map.append((set, pydevd_resolver.setResolver)) - except: - pass # not available on all python versions - - try: - default_type_map.append((frozenset, pydevd_resolver.setResolver)) - except: - pass # not available on all python versions - - try: - from django.utils.datastructures import MultiValueDict - default_type_map.insert(0, (MultiValueDict, pydevd_resolver.multiValueDictResolver)) - # we should put it before dict - except: - pass # django may not be installed - - try: - from django.forms import BaseForm - default_type_map.insert(0, (BaseForm, pydevd_resolver.djangoFormResolver)) - # we should put it before instance resolver - except: - pass # django may not be installed - - try: - from collections import deque - default_type_map.append((deque, pydevd_resolver.dequeResolver)) - except: - pass - - try: - from ctypes import Array - default_type_map.append((Array, pydevd_resolver.tupleResolver)) - except: - pass - - if frame_type is not None: - default_type_map.append((frame_type, pydevd_resolver.frameResolver)) - - if _IS_JYTHON: - from org.python import core # @UnresolvedImport - default_type_map.append((core.PyNone, None)) - default_type_map.append((core.PyInteger, None)) - default_type_map.append((core.PyLong, None)) - default_type_map.append((core.PyFloat, None)) - default_type_map.append((core.PyComplex, None)) - default_type_map.append((core.PyString, None)) - default_type_map.append((core.PyTuple, pydevd_resolver.tupleResolver)) - default_type_map.append((core.PyList, pydevd_resolver.tupleResolver)) - default_type_map.append((core.PyDictionary, pydevd_resolver.dictResolver)) - default_type_map.append((core.PyStringMap, pydevd_resolver.dictResolver)) - - if hasattr(core, 'PyJavaInstance'): - # Jython 2.5b3 removed it. - default_type_map.append((core.PyJavaInstance, pydevd_resolver.instanceResolver)) - - return default_type_map - - -class TypeResolveHandler(object): - NO_PROVIDER = [] # Sentinel value (any mutable object to be used as a constant would be valid). - - def __init__(self): - # Note: don't initialize with the types we already know about so that the extensions can override - # the default resolvers that are already available if they want. 
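- # Both caches below are filled lazily: one entry per concrete type, added
- # the first time that type is looked up.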
- self._type_to_resolver_cache = {} - self._type_to_str_provider_cache = {} - self._initialized = False - - def _initialize(self): - self._default_type_map = _create_default_type_map() - self._resolve_providers = pydevd_extension_utils.extensions_of_type(TypeResolveProvider) - self._str_providers = pydevd_extension_utils.extensions_of_type(StrPresentationProvider) - self._initialized = True - - def get_type(self, o): - try: - try: - # Faster than type(o) as we don't need the function call. - type_object = o.__class__ # could fail here - type_name = type_object.__name__ - return self._get_type(o, type_object, type_name) # could fail here - except: - # Not all objects have __class__ (i.e.: there are bad bindings around). - type_object = type(o) - type_name = type_object.__name__ - - try: - return self._get_type(o, type_object, type_name) - except: - if isinstance(type_object, type): - # If it's still something manageable, use the default resolver, otherwise - # fallback to saying that it wasn't possible to get any info on it. - return type_object, str(type_name), pydevd_resolver.defaultResolver - - return 'Unable to get Type', 'Unable to get Type', None - except: - # This happens for org.python.core.InitModule - return 'Unable to get Type', 'Unable to get Type', None - - def _get_type(self, o, type_object, type_name): - # Note: we could have an exception here if the type_object is not hashable... - resolver = self._type_to_resolver_cache.get(type_object) - if resolver is not None: - return type_object, type_name, resolver - - if not self._initialized: - self._initialize() - - try: - for resolver in self._resolve_providers: - if resolver.can_provide(type_object, type_name): - # Cache it - self._type_to_resolver_cache[type_object] = resolver - return type_object, type_name, resolver - - for t in self._default_type_map: - if isinstance_checked(o, t[0]): - # Cache it - resolver = t[1] - self._type_to_resolver_cache[type_object] = resolver - return (type_object, type_name, resolver) - except: - pydev_log.exception() - - # No match return default (and cache it). - resolver = pydevd_resolver.defaultResolver - self._type_to_resolver_cache[type_object] = resolver - return type_object, type_name, resolver - - if _IS_JYTHON: - _base_get_type = _get_type - - def _get_type(self, o, type_object, type_name): - if type_name == 'org.python.core.PyJavaInstance': - return type_object, type_name, pydevd_resolver.instanceResolver - - if type_name == 'org.python.core.PyArray': - return type_object, type_name, pydevd_resolver.jyArrayResolver - - return self._base_get_type(o, type_object, type_name) - - def _get_str_from_provider(self, provider, o, context: Optional[str]=None): - if context is not None: - get_str_in_context = getattr(provider, 'get_str_in_context', None) - if get_str_in_context is not None: - return get_str_in_context(o, context) - - return provider.get_str(o) - - def str_from_providers(self, o, type_object, type_name, context: Optional[str]=None): - provider = self._type_to_str_provider_cache.get(type_object) - - if provider is self.NO_PROVIDER: - return None - - if provider is not None: - return self._get_str_from_provider(provider, o, context) - - if not self._initialized: - self._initialize() - - for provider in self._str_providers: - if provider.can_provide(type_object, type_name): - self._type_to_str_provider_cache[type_object] = provider - try: - return self._get_str_from_provider(provider, o, context) - except: - pydev_log.exception("Error when getting str with custom provider: %s." 
% (provider,)) - - self._type_to_str_provider_cache[type_object] = self.NO_PROVIDER - return None - - -_TYPE_RESOLVE_HANDLER = TypeResolveHandler() - -""" -def get_type(o): - Receives object and returns a triple (type_object, type_string, resolver). - - resolver != None means that variable is a container, and should be displayed as a hierarchy. - - Use the resolver to get its attributes. - - All container objects (i.e.: dict, list, tuple, object, etc) should have a resolver. -""" -get_type = _TYPE_RESOLVE_HANDLER.get_type - -_str_from_providers = _TYPE_RESOLVE_HANDLER.str_from_providers - - -def is_builtin(x): - return getattr(x, '__module__', None) == BUILTINS_MODULE_NAME - - -def should_evaluate_full_value(val): - return not LOAD_VALUES_ASYNC or (is_builtin(type(val)) and not isinstance_checked(val, (list, tuple, dict))) - - -def return_values_from_dict_to_xml(return_dict): - res = [] - for name, val in return_dict.items(): - res.append(var_to_xml(val, name, additional_in_xml=' isRetVal="True"')) - return ''.join(res) - - -def frame_vars_to_xml(frame_f_locals, hidden_ns=None): - """ dumps frame variables to XML - - """ - xml = [] - - keys = sorted(frame_f_locals) - - return_values_xml = [] - - for k in keys: - try: - v = frame_f_locals[k] - eval_full_val = should_evaluate_full_value(v) - - if k == '_pydev_stop_at_break': - continue - - if k == RETURN_VALUES_DICT: - for name, val in v.items(): - return_values_xml.append(var_to_xml(val, name, additional_in_xml=' isRetVal="True"')) - - else: - if hidden_ns is not None and k in hidden_ns: - xml.append(var_to_xml(v, str(k), additional_in_xml=' isIPythonHidden="True"', - evaluate_full_value=eval_full_val)) - else: - xml.append(var_to_xml(v, str(k), evaluate_full_value=eval_full_val)) - except Exception: - pydev_log.exception("Unexpected error, recovered safely.") - - # Show return values as the first entry. - return_values_xml.extend(xml) - return ''.join(return_values_xml) - - -def get_variable_details(val, evaluate_full_value=True, to_string=None, context: Optional[str]=None): - ''' - :param context: - This is the context in which the variable is being requested. Valid values: - "watch", - "repl", - "hover", - "clipboard" - ''' - try: - # This should be faster than isinstance (but we have to protect against not having a '__class__' attribute). - is_exception_on_eval = val.__class__ == ExceptionOnEvaluate - except: - is_exception_on_eval = False - - if is_exception_on_eval: - v = val.result - else: - v = val - - _type, type_name, resolver = get_type(v) - type_qualifier = getattr(_type, "__module__", "") - if not evaluate_full_value: - value = DEFAULT_VALUE - else: - try: - str_from_provider = _str_from_providers(v, _type, type_name, context) - if str_from_provider is not None: - value = str_from_provider - - elif to_string is not None: - value = to_string(v) - - elif hasattr_checked(v, '__class__'): - if v.__class__ == frame_type: - value = pydevd_resolver.frameResolver.get_frame_name(v) - - elif v.__class__ in (list, tuple): - if len(v) > 300: - value = '%s: %s' % (str(v.__class__), '' % (len(v),)) - else: - value = '%s: %s' % (str(v.__class__), v) - else: - try: - cName = str(v.__class__) - if cName.find('.') != -1: - cName = cName.split('.')[-1] - - elif cName.find("'") != -1: # does not have '.' 
(could be something like ) - cName = cName[cName.index("'") + 1:] - - if cName.endswith("'>"): - cName = cName[:-2] - except: - cName = str(v.__class__) - - value = '%s: %s' % (cName, v) - else: - value = str(v) - except: - try: - value = repr(v) - except: - value = 'Unable to get repr for %s' % v.__class__ - - # fix to work with unicode values - try: - if value.__class__ == bytes: - value = value.decode('utf-8', 'replace') - except TypeError: - pass - - return type_name, type_qualifier, is_exception_on_eval, resolver, value - - -def var_to_xml(val, name, trim_if_too_big=True, additional_in_xml='', evaluate_full_value=True): - """ single variable or dictionary to xml representation """ - - type_name, type_qualifier, is_exception_on_eval, resolver, value = get_variable_details( - val, evaluate_full_value) - - scope = get_var_scope(name, val, '', True) - try: - name = quote(name, '/>_= ') # TODO: Fix PY-5834 without using quote - except: - pass - - xml = ' MAXIMUM_VARIABLE_REPRESENTATION_SIZE and trim_if_too_big: - value = value[0:MAXIMUM_VARIABLE_REPRESENTATION_SIZE] - value += '...' - - xml_value = ' value="%s"' % (make_valid_xml_value(quote(value, '/>_= '))) - else: - xml_value = '' - - if is_exception_on_eval: - xml_container = ' isErrorOnEval="True"' - else: - if resolver is not None: - xml_container = ' isContainer="True"' - else: - xml_container = '' - - if scope: - return ''.join((xml, xml_qualifier, xml_value, xml_container, additional_in_xml, ' scope="', scope, '"', ' />\n')) - else: - return ''.join((xml, xml_qualifier, xml_value, xml_container, additional_in_xml, ' />\n')) diff --git a/spaces/Suniilkumaar/MusicGen-updated/setup.py b/spaces/Suniilkumaar/MusicGen-updated/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/MusicGen-updated/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. 
- -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/TH5314/newbing/src/components/button-scroll-to-bottom.tsx b/spaces/TH5314/newbing/src/components/button-scroll-to-bottom.tsx deleted file mode 100644 index b68ab9c0e48320c356e51a52d11b9ca63909e6c5..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/button-scroll-to-bottom.tsx +++ /dev/null @@ -1,34 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' -import { useAtBottom } from '@/lib/hooks/use-at-bottom' -import { Button, type ButtonProps } from '@/components/ui/button' -import { IconArrowDown } from '@/components/ui/icons' - -export function ButtonScrollToBottom({ className, ...props }: ButtonProps) { - const isAtBottom = useAtBottom() - - return ( - - ) -} diff --git a/spaces/TMojo/FoodVision_Mini/README.md b/spaces/TMojo/FoodVision_Mini/README.md deleted file mode 100644 index d00842f63f070efcd927182c093cafb2d1500a18..0000000000000000000000000000000000000000 --- a/spaces/TMojo/FoodVision_Mini/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FoodVision Mini -emoji: 🌍 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/help.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/help.py deleted file mode 100644 index 2d292c2f062cd80cd108aac503eae7b635ceec8d..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/help.py +++ /dev/null @@ -1,131 +0,0 @@ -"""Module containing bug report helper(s).""" - -import json -import platform -import ssl -import sys - -from pip._vendor import idna -from pip._vendor import urllib3 - -from . 
import __version__ as requests_version - -charset_normalizer = None - -try: - from pip._vendor import chardet -except ImportError: - chardet = None - -try: - from pip._vendor.urllib3.contrib import pyopenssl -except ImportError: - pyopenssl = None - OpenSSL = None - cryptography = None -else: - import cryptography - import OpenSSL - - -def _implementation(): - """Return a dict with the Python implementation and version. - - Provide both the name and the version of the Python implementation - currently running. For example, on CPython 3.10.3 it will return - {'name': 'CPython', 'version': '3.10.3'}. - - This function works best on CPython and PyPy: in particular, it probably - doesn't work for Jython or IronPython. Future investigation should be done - to work out the correct shape of the code for those platforms. - """ - implementation = platform.python_implementation() - - if implementation == "CPython": - implementation_version = platform.python_version() - elif implementation == "PyPy": - implementation_version = "{}.{}.{}".format( - sys.pypy_version_info.major, - sys.pypy_version_info.minor, - sys.pypy_version_info.micro, - ) - if sys.pypy_version_info.releaselevel != "final": - implementation_version = "".join( - [implementation_version, sys.pypy_version_info.releaselevel] - ) - elif implementation == "Jython": - implementation_version = platform.python_version() # Complete Guess - elif implementation == "IronPython": - implementation_version = platform.python_version() # Complete Guess - else: - implementation_version = "Unknown" - - return {"name": implementation, "version": implementation_version} - - -def info(): - """Generate information for a bug report.""" - try: - platform_info = { - "system": platform.system(), - "release": platform.release(), - } - except OSError: - platform_info = { - "system": "Unknown", - "release": "Unknown", - } - - implementation_info = _implementation() - urllib3_info = {"version": urllib3.__version__} - charset_normalizer_info = {"version": None} - chardet_info = {"version": None} - if charset_normalizer: - charset_normalizer_info = {"version": charset_normalizer.__version__} - if chardet: - chardet_info = {"version": chardet.__version__} - - pyopenssl_info = { - "version": None, - "openssl_version": "", - } - if OpenSSL: - pyopenssl_info = { - "version": OpenSSL.__version__, - "openssl_version": f"{OpenSSL.SSL.OPENSSL_VERSION_NUMBER:x}", - } - cryptography_info = { - "version": getattr(cryptography, "__version__", ""), - } - idna_info = { - "version": getattr(idna, "__version__", ""), - } - - system_ssl = ssl.OPENSSL_VERSION_NUMBER - system_ssl_info = {"version": f"{system_ssl:x}" if system_ssl is not None else ""} - - return { - "platform": platform_info, - "implementation": implementation_info, - "system_ssl": system_ssl_info, - "using_pyopenssl": pyopenssl is not None, - "using_charset_normalizer": chardet is None, - "pyOpenSSL": pyopenssl_info, - "urllib3": urllib3_info, - "chardet": chardet_info, - "charset_normalizer": charset_normalizer_info, - "cryptography": cryptography_info, - "idna": idna_info, - "requests": { - "version": requests_version, - }, - } - - -def main(): - """Pretty-print the bug information as JSON.""" - print(json.dumps(info(), sort_keys=True, indent=2)) - - -if __name__ == "__main__": - main() diff --git a/spaces/TechnoByte/ComfyUI-Kybalico/Dockerfile b/spaces/TechnoByte/ComfyUI-Kybalico/Dockerfile deleted file mode 100644 index 370a62f02181c2cb9bb1242506ace5b0c5d8da4d..0000000000000000000000000000000000000000 --- 
a/spaces/TechnoByte/ComfyUI-Kybalico/Dockerfile +++ /dev/null @@ -1,104 +0,0 @@ -FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04 - -ENV DEBIAN_FRONTEND=noninteractive \ - TZ=America/Los_Angeles - -ARG USE_PERSISTENT_DATA - -RUN apt-get update && apt-get install -y \ - git \ - make build-essential libssl-dev zlib1g-dev \ - libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \ - libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev git-lfs \ - ffmpeg libsm6 libxext6 cmake libgl1-mesa-glx \ - && rm -rf /var/lib/apt/lists/* \ - && git lfs install - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -# User -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Pyenv -RUN curl https://pyenv.run | bash -ENV PATH=$HOME/.pyenv/shims:$HOME/.pyenv/bin:$PATH - -ARG PYTHON_VERSION=3.9.17 -# Python -RUN pyenv install $PYTHON_VERSION && \ - pyenv global $PYTHON_VERSION && \ - pyenv rehash && \ - pip install --no-cache-dir --upgrade pip setuptools wheel && \ - pip install --no-cache-dir \ - datasets \ - huggingface-hub "protobuf<4" "click<8.1" - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set the working directory to /data if USE_PERSISTENT_DATA is set, otherwise set to $HOME/app -WORKDIR $HOME/app - -# Clone the ComfyUI repo (fork with restart button) -RUN git clone https://github.com/ThisModernDay/ComfyUI . && \ - pip install --no-cache-dir -r requirements.txt - -# Checkpoints -RUN echo "Downloading checkpoints..." && \ - # Kybalico Models - # wget -c https://huggingface.co/Kybalico/CandyApple/resolve/main/candyApple_v12.safetensors -P ./models/checkpoints/ && \ - wget -cq https://huggingface.co/Kybalico/CalicoMix/resolve/main/calicoMix_v75.safetensors -P ./models/checkpoints/ && \ - # wget -c https://huggingface.co/Kybalico/CalicoMixDC/resolve/main/calicomix_dcV30.safetensors -P ./models/checkpoints/ && \ - # wget -c https://huggingface.co/Kybalico/AnmitsuMimimi/resolve/main/anmitsuMimimi_v10.safetensors -P ./models/checkpoints/ && \ - - # TechnoByte Models - wget -cq https://huggingface.co/TechnoByte/MilkyWonderland/resolve/main/milkyWonderland_v20.safetensors -P ./models/checkpoints/ && \ - - # VAE - rm -rf ./models/vae && \ - git clone https://huggingface.co/Kefasu/sd-vae-collection ./models/vae/ --depth=1 && \ - wget -cq https://huggingface.co/RedRayz/MyVAE/resolve/main/CleanVAE.safetensors -P ./models/vae/ && \ - - # ControlNet - # wget -c https://huggingface.co/comfyanonymous/ControlNet-v1-1_fp16_safetensors/resolve/main/control_v11p_sd15s2_lineart_anime_fp16.safetensors -P ./models/controlnet/ && \ - - # GLIGEN - # wget -c https://huggingface.co/comfyanonymous/GLIGEN_pruned_safetensors/resolve/main/gligen_sd14_textbox_pruned_fp16.safetensors -P ./models/gligen/ && \ - - # ESRGAN upscale models - rm -rf ./models/upscale_models && \ - git clone https://huggingface.co/utnah/esrgan ./models/upscale_models/ --depth=1 && \ - - # Aesthetic scorer models - mkdir ./models/aesthetic && \ - wget -c https://github.com/grexzen/SD-Chad/raw/main/chadscorer.pth -P ./models/aesthetic/ && \ - wget -c https://github.com/christophschuhmann/improved-aesthetic-predictor/raw/main/ava+logos-l14-linearMSE.pth -P ./models/aesthetic/ && \ - - # ComfyUI Manager - cd custom_nodes && git clone https://github.com/ltdrdata/ComfyUI-Manager.git && \ - - # Install custom nodes - echo "Installing custom nodes..." 
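- # The commented-out clones below are optional node packs that can be
- # re-enabled as needed; the active RUN lines install the packs this Space uses.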
- - # Controlnet Preprocessor nodes by Fannovel16 - # RUN cd custom_nodes && git clone https://github.com/Fannovel16/comfy_controlnet_preprocessors && cd comfy_controlnet_preprocessors && python install.py --no_download_ckpts - # RUN cd custom_nodes && git clone https://github.com/Fannovel16/comfyui_controlnet_aux && cd comfyui_controlnet_aux && pip install -r requirements.txt - # RUN cd custom_nodes && git clone https://github.com/Stability-AI/stability-ComfyUI-nodes && cd stability-ComfyUI-nodes && pip install -r requirements.txt - - RUN cd custom_nodes && git clone https://github.com/EllangoK/ComfyUI-post-processing-nodes --depth 1 - RUN cd custom_nodes && git clone https://github.com/TinyTerra/ComfyUI_tinyterraNodes --depth 1 - RUN cd custom_nodes && git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack --depth 1 && cd ComfyUI-Impact-Pack && python install.py - RUN cd custom_nodes && git clone https://github.com/TechnoByteJS/comfy-aesthetic-nodes --depth 1 && cd comfy-aesthetic-nodes && pip install -r requirements.txt - # RUN cd custom_nodes && git clone https://github.com/rgthree/rgthree-comfy --depth 1 - - RUN echo "Done" - -CMD ["python", "main.py", "--listen", "0.0.0.0", "--cpu", "--port", "7860", "--output-directory", "${USE_PERSISTENT_DATA:+/data/}"] - - - - diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/README.md b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/README.md deleted file mode 100644 index 778ed3da0bae89820831bcd8a72ff7b9cad8d4dd..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/README.md +++ /dev/null @@ -1,7 +0,0 @@ - - -To add a new Op: - -1. Create a new directory -2. Implement new ops there -3. Delcare its Python interface in `vision.cpp`. diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/Makefile b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/Makefile deleted file mode 100644 index 718eddce170fe13b67216baf9d4d25b20e860506..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/docs/Makefile +++ /dev/null @@ -1,19 +0,0 @@ -# Minimal makefile for Sphinx documentation -# Copyright (c) Facebook, Inc. and its affiliates. - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = sphinx-build -SOURCEDIR = . -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
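-# "$@" in the catch-all rule below expands to whatever target name was
-# requested on the command line and is forwarded to sphinx-build's make mode.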
-%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/spaces/Tetel/chat/public/dialog.css b/spaces/Tetel/chat/public/dialog.css deleted file mode 100644 index e58b690fd6e4a57afa2aa01678b1c0141d98eaba..0000000000000000000000000000000000000000 --- a/spaces/Tetel/chat/public/dialog.css +++ /dev/null @@ -1,73 +0,0 @@ -.modal { - display: flex; - justify-content: center; - align-items: center; - position: fixed; - z-index: 1; - left: 0; - top: 0; - width: 100%; - height: 100%; - overflow: auto; - background-color: rgba(0, 0, 0, 0.4); -} - -.modal-content { - background-color: #fefefe; - margin: auto; - padding: 20px; - border: 1px solid #888; - width: 80%; -} - -.close { - color: #aaaaaa; - float: right; - font-size: 28px; - font-weight: bold; -} - -.close:hover, -.close:focus { - color: #000; - text-decoration: none; - cursor: pointer; -} - -.input-field { - width: 100%; - padding: 12px 20px; - margin: 8px 0; - box-sizing: border-box; - border: 2px solid #ccc; - border-radius: 4px; -} - -.large-textarea { - width: 100%; - height: 150px; - padding: 12px 20px; - box-sizing: border-box; - border: 2px solid #ccc; - border-radius: 4px; - resize: vertical; - font-family: "Microsoft YaHei", sans-serif; -} - -.save-button { - background-color: #4CAF50; - color: white; - padding: 15px 32px; - text-align: center; - text-decoration: none; - display: inline-block; - font-size: 16px; - margin: 4px 2px; - cursor: pointer; - border: none; - border-radius: 4px; -} - -.error { - color: red; -} diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/mcalib.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/mcalib.py deleted file mode 100644 index 02ebc73ded60508c2685039489c5ee0c5b989963..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/mcalib.py +++ /dev/null @@ -1,384 +0,0 @@ -#!/usr/local/bin/python3 - -# avenir-python: Machine Learning -# Author: Pranab Ghosh -# -# Licensed under the Apache License, Version 2.0 (the "License"); you -# may not use this file except in compliance with the License. You may -# obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -# implied. See the License for the specific language governing -# permissions and limitations under the License. 
- -# Package imports -import os -import sys -import matplotlib.pyplot as plt -import numpy as np -import sklearn as sk -from sklearn.neighbors import KDTree -import matplotlib -import random -import jprops -from random import randint -import statistics -sys.path.append(os.path.abspath("../lib")) -from util import * -from mlutil import * -from tnn import * -from stats import * - -""" -neural model calibration -""" -class ModelCalibration(object): - def __init__(self): - pass - - @staticmethod - def findModelCalibration(model): - """ - pmodel calibration - """ - FeedForwardNetwork.prepValidate(model) - FeedForwardNetwork.validateModel(model) - - yPred = model.yPred.flatten() - yActual = model.validOutData.flatten() - nsamp = len(yActual) - - #print(yPred.shape) - #print(yActual.shape) - - nBins = model.config.getIntConfig("calibrate.num.bins")[0] - prThreshhold = model.config.getFloatConfig("calibrate.pred.prob.thresh")[0] - - minConf = yPred.min() - maxConf = yPred.max() - bsize = (maxConf - minConf) / nBins - #print("minConf {:.3f} maxConf {:.3f} bsize {:.3f}".format(minConf, maxConf, bsize)) - blist = list(map(lambda i : None, range(nBins))) - - #binning - for yp, ya in zip(yPred, yActual): - indx = int((yp - minConf) / bsize) - if indx == nBins: - indx = nBins - 1 - #print("yp {:.3f} indx {}".format(yp, indx)) - pair = (yp, ya) - plist = blist[indx] - if plist is None: - plist = list() - blist[indx] = plist - plist.append(pair) - - x = list() - y = list() - yideal = list() - ece = 0 - mce = 0 - - # per bin confidence and accuracy - b = 0 - for plist in blist: - if plist is not None: - #confidence - ypl = list(map(lambda p : p[0], plist)) - ypm = statistics.mean(ypl) - x.append(ypm) - - #accuracy - ypcount = 0 - for p in plist: - yp = 1 if p[0] > prThreshhold else 0 - if (yp == 1 and p[1] == 1): - ypcount += 1 - - acc = ypcount / len(plist) - y.append(acc) - yideal.append(ypm) - - ce = abs(ypm - acc) - ece += len(plist) * ce - if ce > mce: - mce = ce - else: - ypm = minConf + (b + 0.5) * bsize - x.append(ypm) - yideal.append(ypm) - y.append(0) - b += 1 - - #calibration plot - drawPairPlot(x, y, yideal, "confidence", "accuracy", "actual", "ideal") - - print("confidence\taccuracy") - for z in zip(x,y): - print("{:.3f}\t{:.3f}".format(z[0], z[1])) - - - #expected calibration error - ece /= nsamp - print("expected calibration error\t{:.3f}".format(ece)) - print("maximum calibration error\t{:.3f}".format(mce)) - - - @staticmethod - def findModelCalibrationLocal(model): - """ - pmodel calibration based k nearest neghbors - """ - FeedForwardNetwork.prepValidate(model) - FeedForwardNetwork.validateModel(model) - - yPred = model.yPred.flatten() - yActual = model.validOutData.flatten() - nsamp = len(yActual) - - neighborCnt = model.config.getIntConfig("calibrate.num.nearest.neighbors")[0] - prThreshhold = model.config.getFloatConfig("calibrate.pred.prob.thresh")[0] - fData = model.validFeatData.numpy() - tree = KDTree(fData, leaf_size=4) - - dist, ind = tree.query(fData, k=neighborCnt) - calibs = list() - #all data - for si, ni in enumerate(ind): - conf = 0 - ypcount = 0 - #all neighbors - for i in ni: - conf += yPred[i] - yp = 1 if yPred[i] > prThreshhold else 0 - if (yp == 1 and yActual[i] == 1): - ypcount += 1 - conf /= neighborCnt - acc = ypcount / neighborCnt - calib = (si, conf, acc) - calibs.append(calib) - - #descending sort by difference between confidence and accuracy - calibs = sorted(calibs, key=lambda c : abs(c[1] - c[2]), reverse=True) - print("local calibration") - 
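- # Each row below pairs a neighborhood's mean predicted confidence with its
- # empirical accuracy; the largest gaps mark the least locally calibrated
- # regions of the feature space.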
print("conf\taccu\trecord") - for i in range(19): - si, conf, acc = calibs[i] - rec = toStrFromList(fData[si], 3) - print("{:.3f}\t{:.3f}\t{}".format(conf, acc, rec)) - - @staticmethod - def findModelSharpness(model): - """ - pmodel calibration - """ - FeedForwardNetwork.prepValidate(model) - FeedForwardNetwork.validateModel(model) - - yPred = model.yPred.flatten() - yActual = model.validOutData.flatten() - nsamp = len(yActual) - - #print(yPred.shape) - #print(yActual.shape) - - nBins = model.config.getIntConfig("calibrate.num.bins")[0] - prThreshhold = model.config.getFloatConfig("calibrate.pred.prob.thresh")[0] - - minConf = yPred.min() - maxConf = yPred.max() - bsize = (maxConf - minConf) / nBins - #print("minConf {:.3f} maxConf {:.3f} bsize {:.3f}".format(minConf, maxConf, bsize)) - blist = list(map(lambda i : None, range(nBins))) - - #binning - for yp, ya in zip(yPred, yActual): - indx = int((yp - minConf) / bsize) - if indx == nBins: - indx = nBins - 1 - #print("yp {:.3f} indx {}".format(yp, indx)) - pair = (yp, ya) - plist = blist[indx] - if plist is None: - plist = list() - blist[indx] = plist - plist.append(pair) - - y = list() - ypgcount = 0 - # per bin confidence and accuracy - for plist in blist: - #ypl = list(map(lambda p : p[0], plist)) - #ypm = statistics.mean(ypl) - #x.append(ypm) - - ypcount = 0 - for p in plist: - yp = 1 if p[0] > prThreshhold else 0 - if (yp == 1 and p[1] == 1): - ypcount += 1 - ypgcount += 1 - - acc = ypcount / len(plist) - y.append(acc) - - print("{} {}".format(ypgcount, nsamp)) - accg = ypgcount / nsamp - accgl = [accg] * nBins - x = list(range(nBins)) - drawPairPlot(x, y, accgl, "discretized confidence", "accuracy", "local", "global") - - contrast = list(map(lambda acc : abs(acc - accg), y)) - contrast = statistics.mean(contrast) - print("contrast {:.3f}".format(contrast)) - -""" -neural model robustness -""" -class ModelRobustness(object): - def __init__(self): - pass - - def localPerformance(self, model, fpath, nsamp, neighborCnt): - """ - local performnance sampling - """ - - #load data - fData, oData = FeedForwardNetwork.prepData(model, fpath) - #print(type(fData)) - #print(type(oData)) - #print(fData.shape) - dsize = fData.shape[0] - ncol = fData.shape[1] - - #kdd - tree = KDTree(fData, leaf_size=4) - - scores = list() - indices = list() - for _ in range(nsamp): - indx = randomInt(0, dsize - 1) - indices.append(indx) - frow = fData[indx] - frow = np.reshape(frow, (1, ncol)) - dist, ind = tree.query(frow, k=neighborCnt) - - ind = ind[0] - vfData = fData[ind] - voData = oData[ind] - - #print(type(vfData)) - #print(vfData.shape) - #print(type(voData)) - #print(voData.shape) - - model.setValidationData((vfData, voData), False) - score = FeedForwardNetwork.validateModel(model) - scores.append(score) - - #performance distribution - m, s = basicStat(scores) - print("model performance: mean {:.3f}\tstd dev {:.3f}".format(m,s)) - drawHist(scores, "model accuracy", "accuracy", "frequency") - - #worst performance - lscores = sorted(zip(indices, scores), key=lambda s : s[1]) - print(lscores[:5]) - - lines = getFileLines(fpath, None) - print("worst performing features regions") - for i,s in lscores[:5]: - print("score {:.3f}\t{}".format(s, lines[i])) - - -""" -conformal prediction for regression -""" -class ConformalRegressionPrediction(object): - def __init__(self): - self.calibration = dict() - - def calibrate(self, ypair, confBound): - """ n - calibration for conformal prediction - """ - cscores = list() - ymax = None - ymin = None - for yp, ya in ypair: 
- cscore = abs(yp - ya) - cscores.append(cscore) - if ymax is None: - ymax = ya - ymin = ya - else: - ymax = ya if ya > ymax else ymax - ymin = ya if ya < ymin else ymin - - cscores.sort() - drawHist(cscores, "conformal score distribution", "conformal score", "frequency", 20) - cbi = int(confBound * len(cscores)) - scoreConfBound = cscores[cbi] - self.calibration["scoreConfBound"] = scoreConfBound - self.calibration["ymin"] = ymin - self.calibration["ymax"] = ymax - print(self.calibration) - - def saveCalib(self, fPath): - """ - saves scoformal score calibration - """ - saveObject(self.calibration, fPath) - - def restoreCalib(self, fPath): - """ - saves scoformal score calibration - """ - self.calibration = restoreObject(fPath) - print(self.calibration) - - def getPredRange(self, yp, nstep=100): - """ - get prediction range and related data - """ - ymin = self.calibration["ymin"] - ymax = self.calibration["ymax"] - step = (ymax - ymin) / nstep - scoreConfBound = self.calibration["scoreConfBound"] - - rmin = None - rmax = None - rcount = 0 - #print(ymin, ymax, step) - for ya in np.arange(ymin, ymax, step): - cscore = abs(yp - ya) - if cscore < scoreConfBound: - if rmin is None: - #lower bound - rmin = ya - rmax = ya - else: - #keep updating upper bound - rmax = ya if ya > rmax else rmax - rcount += 1 - else: - if rmax is not None and rcount > 0: - #past upper bound - break - - res = dict() - res["predRangeMin"] = rmin - res["predRangeMax"] = rmax - accepted = yp >= rmin and yp <= rmax - res["status"] = "accepted" if accepted else "rejected" - conf = 1.0 - (rmax - rmin) / (ymax - ymin) - res["confidence"] = conf - - return res - - \ No newline at end of file diff --git a/spaces/ThomasSimonini/Deep-Reinforcement-Learning-Leaderboard/README.md b/spaces/ThomasSimonini/Deep-Reinforcement-Learning-Leaderboard/README.md deleted file mode 100644 index b7aa368ba5f99005946921594def462df4eb8049..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/Deep-Reinforcement-Learning-Leaderboard/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Deep Reinforcement Learning Leaderboard -emoji: 🚀 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false -duplicated_from: huggingface-projects/Deep-Reinforcement-Learning-Leaderboard ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/User1342/Ivory/README.md b/spaces/User1342/Ivory/README.md deleted file mode 100644 index da2a546316a3812196e503e548b426db61a9e9fc..0000000000000000000000000000000000000000 --- a/spaces/User1342/Ivory/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: WatchTower Ivory -emoji: 🐘 -colorFrom: Grey -colorTo: Black -sdk: gradio -sdk_version: 3.9.1 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/VinayHajare/Marathi-Audio-Transcriber-and-Translator/app.py b/spaces/VinayHajare/Marathi-Audio-Transcriber-and-Translator/app.py deleted file mode 100644 index a7ad3999d4ebc6dbabb6e673302e74aaeb6c5f19..0000000000000000000000000000000000000000 --- a/spaces/VinayHajare/Marathi-Audio-Transcriber-and-Translator/app.py +++ /dev/null @@ -1,140 +0,0 @@ -import torch -from transformers import pipeline -from transformers.pipelines.audio_utils import ffmpeg_read -import gradio as gr -import pytube as pt - -MODEL_NAME = "VinayHajare/whisper-small-finetuned-common-voice-mr" -BATCH_SIZE = 8 -LANG = "mr" 
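-# use GPU 0 when CUDA is available, otherwise fall back to the CPU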
-device = 0 if torch.cuda.is_available() else "cpu" - -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - chunk_length_s=30, - device=device, -) - -pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=LANG) - -# Copied from https://github.com/openai/whisper/blob/c09a7ae299c4c34c5839a76380ae407e7d785914/whisper/utils.py#L50 -def format_timestamp(seconds: float, always_include_hours: bool = False, decimal_marker: str = "."): - if seconds is not None: - milliseconds = round(seconds * 1000.0) - - hours = milliseconds // 3_600_000 - milliseconds -= hours * 3_600_000 - - minutes = milliseconds // 60_000 - milliseconds -= minutes * 60_000 - - seconds = milliseconds // 1_000 - milliseconds -= seconds * 1_000 - - hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else "" - return f"{hours_marker}{minutes:02d}:{seconds:02d}{decimal_marker}{milliseconds:03d}" - else: - # we have a malformed timestamp so just return it as is - return seconds - -def transcribe(file, task, return_timestamps): - outputs = pipe(file, batch_size=BATCH_SIZE, generate_kwargs={"task": task}, return_timestamps=return_timestamps) - text = outputs["text"] - if return_timestamps: - timestamps = outputs["chunks"] - timestamps = [ - f"[{format_timestamp(chunk['timestamp'][0])} -> {format_timestamp(chunk['timestamp'][1])}] {chunk['text']}" - for chunk in timestamps - ] - text = "\n".join(str(feature) for feature in timestamps) - return text - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'

<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>' - " </center>
" - ) - return HTML_str - -def yt_transcribe(yt_url, task, return_timestamps): - yt = pt.YouTube(yt_url) - html_embed_str = _return_yt_html_embed(yt_url) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename="audio.mp3") - outputs = pipe("audio.mp3",batch_size=BATCH_SIZE, generate_kwargs={"task": task}, return_timestamps=return_timestamps) - text = outputs["text"] - if return_timestamps: - timestamps = outputs["chunks"] - timestamps = [ - f"[{format_timestamp(chunk['timestamp'][0])} -> {format_timestamp(chunk['timestamp'][1])}] {chunk['text']}" - for chunk in timestamps - ] - text = "\n".join(str(feature) for feature in timestamps) - return html_embed_str, text - -demo = gr.Blocks() - -mic_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", optional=True), - gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"), - gr.inputs.Checkbox(default=False, label="Return timestamps"), - ], - outputs="text", - layout="horizontal", - theme="huggingface", - title="Whisper Demo: Transcribe Marathi Audio", - description=( - "Transcribe long-form microphone or audio inputs with the click of a button! Demo uses the" - f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files" - " of arbitrary length." - ), - allow_flagging="never", -) - -file_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="upload", optional=True, label="Audio file", type="filepath"), - gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"), - gr.inputs.Checkbox(default=False, label="Return timestamps"), - ], - outputs="text", - layout="horizontal", - theme="huggingface", - title="Whisper Demo: Transcribe Marathi Audio", - description=( - "Transcribe long-form microphone or audio inputs with the click of a button! Demo uses the" - f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files" - " of arbitrary length." - ), - cache_examples=True, - allow_flagging="never", -) - -yt_transcribe = gr.Interface( - fn=yt_transcribe, - inputs=[ - gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube Video URL"), - gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"), - gr.inputs.Checkbox(default=False, label="Return timestamps"), - ], - outputs=["html", "text"], - layout="horizontal", - theme="huggingface", - title="Whisper Demo: Transcribe Marathi YouTube Video", - description=( - "Transcribe long-form YouTube videos with the click of a button! Demo uses the the fine-tuned checkpoint:" - f" [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files of" - " arbitrary length." 
- ), - allow_flagging="never", -) - -with demo: - gr.TabbedInterface([mic_transcribe, file_transcribe, yt_transcribe], ["Transcribe Microphone", "Transcribe Audio File", "Transcribe YouTube Video"]) - -demo.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/WangZeJun/bloom-820m-chat/README.md b/spaces/WangZeJun/bloom-820m-chat/README.md deleted file mode 100644 index ba12797d50b6ace72447720dfe13424fb2e6ce42..0000000000000000000000000000000000000000 --- a/spaces/WangZeJun/bloom-820m-chat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Bloom 820m Chat -emoji: 👀 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: bigscience-bloom-rail-1.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/utils/autocast.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/utils/autocast.py deleted file mode 100644 index ed644843bb37cf8a92a20fbd51d6cebaa43b9a08..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/utils/autocast.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -class TorchAutocast: - """TorchAutocast utility class. - Allows you to enable and disable autocast. This is specially useful - when dealing with different architectures and clusters with different - levels of support. - - Args: - enabled (bool): Whether to enable torch.autocast or not. - args: Additional args for torch.autocast. - kwargs: Additional kwargs for torch.autocast - """ - def __init__(self, enabled: bool, *args, **kwargs): - self.autocast = torch.autocast(*args, **kwargs) if enabled else None - - def __enter__(self): - if self.autocast is None: - return - try: - self.autocast.__enter__() - except RuntimeError: - device = self.autocast.device - dtype = self.autocast.fast_dtype - raise RuntimeError( - f"There was an error autocasting with dtype={dtype} device={device}\n" - "If you are on the FAIR Cluster, you might need to use autocast_dtype=float16" - ) - - def __exit__(self, *args, **kwargs): - if self.autocast is None: - return - self.autocast.__exit__(*args, **kwargs) diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/attentions.py b/spaces/XzJosh/Azuma-Bert-VITS2/attentions.py deleted file mode 100644 index 1192dd7268c20c11010e73a6017ed09549695afe..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azuma-Bert-VITS2/attentions.py +++ /dev/null @@ -1,344 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import logging - -logger = logging.getLogger(__name__) - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, 
:]) - acts = t_act * s_act - return acts - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, isflow = True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - #if isflow: - # cond_layer = torch.nn.Conv1d(256, 2*hidden_channels*n_layers, 1) - # self.cond_pre = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, 1) - # self.cond_layer = weight_norm(cond_layer, name='weight') - # self.gin_channels = 256 - self.cond_layer_idx = self.n_layers - if 'gin_channels' in kwargs: - self.gin_channels = kwargs['gin_channels'] - if self.gin_channels != 0: - self.spk_emb_linear = nn.Linear(self.gin_channels, self.hidden_channels) - # vits2 says 3rd block, so idx is 2 by default - self.cond_layer_idx = kwargs['cond_layer_idx'] if 'cond_layer_idx' in kwargs else 2 - logging.debug(self.gin_channels, self.cond_layer_idx) - assert self.cond_layer_idx < self.n_layers, 'cond_layer_idx should be less than n_layers' - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - def forward(self, x, x_mask, g=None): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - if i == self.cond_layer_idx and g is not None: - g = self.spk_emb_linear(g.transpose(1, 2)) - g = g.transpose(1, 2) - x = x + g - x = x * x_mask - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, 
hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
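- # bias attention toward nearby positions with a -log1p(|i - j|) distance penalty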
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Yabo/ControlVideo/models/controlnet_unet_blocks.py b/spaces/Yabo/ControlVideo/models/controlnet_unet_blocks.py deleted file mode 100644 index 75a3bfb5d7994a682fe8896180dd614910a69a07..0000000000000000000000000000000000000000 --- a/spaces/Yabo/ControlVideo/models/controlnet_unet_blocks.py +++ /dev/null @@ -1,589 +0,0 @@ -# Adapted from https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_2d_blocks.py - -import torch -from torch import nn - -from .controlnet_attention import Transformer3DModel -from .resnet import Downsample3D, ResnetBlock3D, Upsample3D - - -def get_down_block( - down_block_type, - num_layers, - in_channels, - out_channels, - temb_channels, - add_downsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - downsample_padding=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - down_block_type = down_block_type[7:] if down_block_type.startswith("UNetRes") else down_block_type - if down_block_type == "DownBlock3D": - return DownBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif down_block_type == "CrossAttnDownBlock3D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnDownBlock3D") - return CrossAttnDownBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - add_downsample=add_downsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - 
resnet_groups=resnet_groups, - downsample_padding=downsample_padding, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{down_block_type} does not exist.") - - -def get_up_block( - up_block_type, - num_layers, - in_channels, - out_channels, - prev_output_channel, - temb_channels, - add_upsample, - resnet_eps, - resnet_act_fn, - attn_num_head_channels, - resnet_groups=None, - cross_attention_dim=None, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - resnet_time_scale_shift="default", -): - up_block_type = up_block_type[7:] if up_block_type.startswith("UNetRes") else up_block_type - if up_block_type == "UpBlock3D": - return UpBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - elif up_block_type == "CrossAttnUpBlock3D": - if cross_attention_dim is None: - raise ValueError("cross_attention_dim must be specified for CrossAttnUpBlock3D") - return CrossAttnUpBlock3D( - num_layers=num_layers, - in_channels=in_channels, - out_channels=out_channels, - prev_output_channel=prev_output_channel, - temb_channels=temb_channels, - add_upsample=add_upsample, - resnet_eps=resnet_eps, - resnet_act_fn=resnet_act_fn, - resnet_groups=resnet_groups, - cross_attention_dim=cross_attention_dim, - attn_num_head_channels=attn_num_head_channels, - dual_cross_attention=dual_cross_attention, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - resnet_time_scale_shift=resnet_time_scale_shift, - ) - raise ValueError(f"{up_block_type} does not exist.") - - -class UNetMidBlock3DCrossAttn(nn.Module): - def __init__( - self, - in_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - output_scale_factor=1.0, - cross_attention_dim=1280, - dual_cross_attention=False, - use_linear_projection=False, - upcast_attention=False, - ): - super().__init__() - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - resnet_groups = resnet_groups if resnet_groups is not None else min(in_channels // 4, 32) - - # there is always at least one resnet - resnets = [ - ResnetBlock3D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ] - attentions = [] - - for _ in range(num_layers): - if dual_cross_attention: - raise NotImplementedError - attentions.append( - Transformer3DModel( - attn_num_head_channels, - in_channels // attn_num_head_channels, - in_channels=in_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - 
norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - upcast_attention=upcast_attention, - ) - ) - resnets.append( - ResnetBlock3D( - in_channels=in_channels, - out_channels=in_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None, cross_attention_kwargs=None): - hidden_states = self.resnets[0](hidden_states, temb) - for attn, resnet in zip(self.attentions, self.resnets[1:]): - hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample - hidden_states = resnet(hidden_states, temb) - - return hidden_states - - -class CrossAttnDownBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - downsample_padding=1, - add_downsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock3D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if dual_cross_attention: - raise NotImplementedError - attentions.append( - Transformer3DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample3D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None, encoder_hidden_states=None, attention_mask=None,cross_attention_kwargs=None): - output_states = () - - for resnet, attn in zip(self.resnets, self.attentions): - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - 
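# [editor's note, not part of the deleted file] torch.utils.checkpoint.checkpoint
# recomputes the wrapped forward during the backward pass instead of storing
# intermediate activations, trading extra compute for memory. The
# create_custom_forward closure above is needed because checkpoint only passes
# positional tensor arguments through; the closure bakes options such as
# return_dict into the callable. The same pattern in isolation, with a
# hypothetical module m and inputs x, ctx:
#   fn = create_custom_forward(m, return_dict=False)
#   out = torch.utils.checkpoint.checkpoint(fn, x, ctx)[0]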
create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class DownBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_downsample=True, - downsample_padding=1, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - in_channels = in_channels if i == 0 else out_channels - resnets.append( - ResnetBlock3D( - in_channels=in_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_downsample: - self.downsamplers = nn.ModuleList( - [ - Downsample3D( - out_channels, use_conv=True, out_channels=out_channels, padding=downsample_padding, name="op" - ) - ] - ) - else: - self.downsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, temb=None): - output_states = () - - for resnet in self.resnets: - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - output_states += (hidden_states,) - - if self.downsamplers is not None: - for downsampler in self.downsamplers: - hidden_states = downsampler(hidden_states) - - output_states += (hidden_states,) - - return hidden_states, output_states - - -class CrossAttnUpBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - out_channels: int, - prev_output_channel: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - attn_num_head_channels=1, - cross_attention_dim=1280, - output_scale_factor=1.0, - add_upsample=True, - dual_cross_attention=False, - use_linear_projection=False, - only_cross_attention=False, - upcast_attention=False, - ): - super().__init__() - resnets = [] - attentions = [] - - self.has_cross_attention = True - self.attn_num_head_channels = attn_num_head_channels - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock3D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - 
output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - if dual_cross_attention: - raise NotImplementedError - attentions.append( - Transformer3DModel( - attn_num_head_channels, - out_channels // attn_num_head_channels, - in_channels=out_channels, - num_layers=1, - cross_attention_dim=cross_attention_dim, - norm_num_groups=resnet_groups, - use_linear_projection=use_linear_projection, - only_cross_attention=only_cross_attention, - upcast_attention=upcast_attention, - ) - ) - - self.attentions = nn.ModuleList(attentions) - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample3D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward( - self, - hidden_states, - res_hidden_states_tuple, - temb=None, - encoder_hidden_states=None, - upsample_size=None, - attention_mask=None, - cross_attention_kwargs=None - ): - for resnet, attn in zip(self.resnets, self.attentions): - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module, return_dict=None): - def custom_forward(*inputs): - if return_dict is not None: - return module(*inputs, return_dict=return_dict) - else: - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - hidden_states = torch.utils.checkpoint.checkpoint( - create_custom_forward(attn, return_dict=False), - hidden_states, - encoder_hidden_states, - )[0] - else: - hidden_states = resnet(hidden_states, temb) - hidden_states = attn(hidden_states, encoder_hidden_states=encoder_hidden_states).sample - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states - - -class UpBlock3D(nn.Module): - def __init__( - self, - in_channels: int, - prev_output_channel: int, - out_channels: int, - temb_channels: int, - dropout: float = 0.0, - num_layers: int = 1, - resnet_eps: float = 1e-6, - resnet_time_scale_shift: str = "default", - resnet_act_fn: str = "swish", - resnet_groups: int = 32, - resnet_pre_norm: bool = True, - output_scale_factor=1.0, - add_upsample=True, - ): - super().__init__() - resnets = [] - - for i in range(num_layers): - res_skip_channels = in_channels if (i == num_layers - 1) else out_channels - resnet_in_channels = prev_output_channel if i == 0 else out_channels - - resnets.append( - ResnetBlock3D( - in_channels=resnet_in_channels + res_skip_channels, - out_channels=out_channels, - temb_channels=temb_channels, - eps=resnet_eps, - groups=resnet_groups, - dropout=dropout, - time_embedding_norm=resnet_time_scale_shift, - non_linearity=resnet_act_fn, - output_scale_factor=output_scale_factor, - pre_norm=resnet_pre_norm, - ) - ) - - self.resnets = nn.ModuleList(resnets) - - if add_upsample: - self.upsamplers = nn.ModuleList([Upsample3D(out_channels, use_conv=True, out_channels=out_channels)]) - else: - self.upsamplers = None - - self.gradient_checkpointing = False - - def forward(self, hidden_states, res_hidden_states_tuple, temb=None, upsample_size=None): - for resnet in self.resnets: - # pop res hidden states - res_hidden_states = res_hidden_states_tuple[-1] - res_hidden_states_tuple = 
res_hidden_states_tuple[:-1] - hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1) - - if self.training and self.gradient_checkpointing: - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs) - - return custom_forward - - hidden_states = torch.utils.checkpoint.checkpoint(create_custom_forward(resnet), hidden_states, temb) - else: - hidden_states = resnet(hidden_states, temb) - - if self.upsamplers is not None: - for upsampler in self.upsamplers: - hidden_states = upsampler(hidden_states, upsample_size) - - return hidden_states diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py deleted file mode 100644 index 60d52eaa1ab4bc380e282067db6bf624589289cd..0000000000000000000000000000000000000000 --- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint_legacy.py +++ /dev/null @@ -1,623 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import inspect -from typing import Callable, List, Optional, Union - -import numpy as np -import torch - -import PIL -from diffusers.utils import is_accelerate_available -from packaging import version -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import ( - DDIMScheduler, - DPMSolverMultistepScheduler, - EulerAncestralDiscreteScheduler, - EulerDiscreteScheduler, - LMSDiscreteScheduler, - PNDMScheduler, -) -from ...utils import PIL_INTERPOLATION, deprecate, logging -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) - - -def preprocess_image(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL_INTERPOLATION["lanczos"]) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask, scale_factor=8): - mask = mask.convert("L") - w, h = mask.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w // scale_factor, h // scale_factor), resample=PIL_INTERPOLATION["nearest"]) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) # what does this step do? 
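# [editor's note, not part of the deleted file] Answering the question above:
# transpose(0, 1, 2, 3) is the identity permutation, i.e. a no-op kept from the
# upstream code. After np.tile and the [None], the mask already has shape
# [1, 4, h // scale_factor, w // scale_factor] -- one copy per latent channel.
# Quick check: np.zeros((4, 8, 8))[None].transpose(0, 1, 2, 3).shape == (1, 4, 8, 8)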
- mask = 1 - mask # repaint white, keep black - mask = torch.from_numpy(mask) - return mask - - -class StableDiffusionInpaintPipelineLegacy(DiffusionPipeline): - r""" - Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. - """ - _optional_components = ["safety_checker", "feature_extractor"] - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.__init__ - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[ - DDIMScheduler, - PNDMScheduler, - LMSDiscreteScheduler, - EulerDiscreteScheduler, - EulerAncestralDiscreteScheduler, - DPMSolverMultistepScheduler, - ], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - requires_safety_checker: bool = True, - ): - super().__init__() - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." 
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." - ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. 
If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - if isinstance(self.unet.config.attention_head_dim, int): - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - else: - # if `attention_head_dim` is a list, take the smallest head size - slice_size = min(self.unet.config.attention_head_dim) - - self.unet.set_attention_slice(slice_size) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_sequential_cpu_offload - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. 
- """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - if self.safety_checker is not None: - # TODO(Patrick) - there is currently a bug with cpu offload of nn.Parameter in accelerate - # fix by only offloading self.safety_checker for now - cpu_offload(self.safety_checker.vision_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._encode_prompt - def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). 
- """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - text_embeddings = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - text_embeddings = text_embeddings[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - uncond_embeddings = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - uncond_embeddings = uncond_embeddings[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.run_safety_checker - def run_safety_checker(self, image, device, dtype): - if self.safety_checker is not None: - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - else: - has_nsfw_concept = None - return image, has_nsfw_concept - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.check_inputs - def check_inputs(self, prompt, strength, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should be in [0.0, 1.0] but is {strength}") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}."
- ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline.get_timesteps - def get_timesteps(self, num_inference_steps, strength, device): - # get the original timestep using init_timestep - offset = self.scheduler.config.get("steps_offset", 0) - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - - t_start = max(num_inference_steps - init_timestep + offset, 0) - timesteps = self.scheduler.timesteps[t_start:] - - return timesteps, num_inference_steps - t_start - - def prepare_latents(self, image, timestep, batch_size, num_images_per_prompt, dtype, device, generator): - image = image.to(device=self.device, dtype=dtype) - init_latent_dist = self.vae.encode(image).latent_dist - init_latents = init_latent_dist.sample(generator=generator) - init_latents = 0.18215 * init_latents - - # Expand init_latents for batch_size and num_images_per_prompt - init_latents = torch.cat([init_latents] * batch_size * num_images_per_prompt, dim=0) - init_latents_orig = init_latents - - # add noise to latents using the timesteps - noise = torch.randn(init_latents.shape, generator=generator, device=self.device, dtype=dtype) - init_latents = self.scheduler.add_noise(init_latents, noise, timestep) - latents = init_latents - return latents, init_latents_orig, noise - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image], - mask_image: Union[torch.FloatTensor, PIL.Image.Image], - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: Optional[float] = 0.0, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: Optional[int] = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. This is the image whose masked region will be inpainted. - mask_image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be - replaced by noise and therefore repainted, while black pixels will be preserved. If `mask_image` is a - PIL image, it will be converted to a single channel (luminance) before use. If it's a tensor, it should - contain one color channel (L) instead of 3, so the expected shape would be `(B, H, W, 1)`. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength` - is 1, the denoising process will be run on the masked area for the full number of iterations specified - in `num_inference_steps`. `image` will be used as a reference for the masked area, adding more noise to - that region the larger the `strength`. If `strength` is 0, no inpainting will occur. - num_inference_steps (`int`, *optional*, defaults to 50): - The reference number of denoising steps. 
More denoising steps usually lead to a higher quality image at - the expense of slower inference. This parameter will be modulated by `strength`, as explained above. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - message = "Please use `image` instead of `init_image`." - init_image = deprecate("init_image", "0.12.0", message, take_from=kwargs) - image = init_image or image - - # 1. Check inputs - self.check_inputs(prompt, strength, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompt - text_embeddings = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt - ) - - # 4. 
Preprocess image and mask - if not isinstance(image, torch.FloatTensor): - image = preprocess_image(image) - - if not isinstance(mask_image, torch.FloatTensor): - mask_image = preprocess_mask(mask_image, self.vae_scale_factor) - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength, device) - latent_timestep = timesteps[:1].repeat(batch_size * num_images_per_prompt) - - # 6. Prepare latent variables - # encode the init image into latents and scale the latents - latents, init_latents_orig, noise = self.prepare_latents( - image, latent_timestep, batch_size, num_images_per_prompt, text_embeddings.dtype, device, generator - ) - - # 7. Prepare mask latent - mask = mask_image.to(device=self.device, dtype=latents.dtype) - mask = torch.cat([mask] * batch_size * num_images_per_prompt) - - # 8. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 9. Denoising loop - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - # masking - init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, torch.tensor([t])) - - latents = (init_latents_proper * mask) + (latents * (1 - mask)) - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 10. Post-processing - image = self.decode_latents(latents) - - # 11. Run safety checker - image, has_nsfw_concept = self.run_safety_checker(image, device, text_embeddings.dtype) - - # 12. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/util/inference.py b/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/util/inference.py deleted file mode 100644 index 7c9b8a0b382f615bcda0ef8220f79afc0892e641..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/util/inference.py +++ /dev/null @@ -1,257 +0,0 @@ -from typing import Tuple, List - -import re -import cv2 -import numpy as np -import supervision as sv -import torch -from PIL import Image -from torchvision.ops import box_convert - -import groundingdino.datasets.transforms as T -from groundingdino.models import build_model -from groundingdino.util.misc import clean_state_dict -from groundingdino.util.slconfig import SLConfig -from groundingdino.util.utils import get_phrases_from_posmap - -# ---------------------------------------------------------------------------------------------------------------------- -# OLD API -# ---------------------------------------------------------------------------------------------------------------------- - - -def preprocess_caption(caption: str) -> str: - result = caption.lower().strip() - if result.endswith("."): - return result - return result + "." - - -def load_model(model_config_path: str, model_checkpoint_path: str, device: str = "cuda"): - args = SLConfig.fromfile(model_config_path) - args.device = device - model = build_model(args) - checkpoint = torch.load(model_checkpoint_path, map_location="cpu") - model.load_state_dict(clean_state_dict(checkpoint["model"]), strict=False) - model.eval() - return model - - -def load_image(image_path: str) -> Tuple[np.array, torch.Tensor]: - transform = T.Compose( - [ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ] - ) - image_source = Image.open(image_path).convert("RGB") - image = np.asarray(image_source) - image_transformed, _ = transform(image_source, None) - return image, image_transformed - - -def predict( - model, - image: torch.Tensor, - caption: str, - box_threshold: float, - text_threshold: float, - device: str = "cuda" -) -> Tuple[torch.Tensor, torch.Tensor, List[str]]: - caption = preprocess_caption(caption=caption) - - model = model.to(device) - image = image.to(device) - - with torch.no_grad(): - outputs = model(image[None], captions=[caption]) - - prediction_logits = outputs["pred_logits"].cpu().sigmoid()[0] # prediction_logits.shape = (nq, 256) - prediction_boxes = outputs["pred_boxes"].cpu()[0] # prediction_boxes.shape = (nq, 4) - - mask = prediction_logits.max(dim=1)[0] > box_threshold - logits = prediction_logits[mask] # logits.shape = (n, 256) - boxes = prediction_boxes[mask] # boxes.shape = (n, 4) - - tokenizer = model.tokenizer - tokenized = tokenizer(caption) - - phrases = [ - get_phrases_from_posmap(logit > text_threshold, tokenized, tokenizer).replace('.', '') - for logit - in logits - ] - - return boxes, logits.max(dim=1)[0], phrases - - -def annotate(image_source: np.ndarray, boxes: torch.Tensor, logits: torch.Tensor, phrases: List[str]) -> np.ndarray: - h, w, _ = image_source.shape - boxes = boxes * torch.Tensor([w, h, w, h]) - xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy() - detections = sv.Detections(xyxy=xyxy) - - labels = [ - 
f"{phrase} {logit:.2f}" - for phrase, logit - in zip(phrases, logits) - ] - - box_annotator = sv.BoxAnnotator() - annotated_frame = cv2.cvtColor(image_source, cv2.COLOR_RGB2BGR) - annotated_frame = box_annotator.annotate(scene=annotated_frame, detections=detections, labels=labels) - return annotated_frame - - -# ---------------------------------------------------------------------------------------------------------------------- -# NEW API -# ---------------------------------------------------------------------------------------------------------------------- - - -class Model: - - def __init__( - self, - model_config_path: str, - model_checkpoint_path: str, - device: str = "cuda" - ): - self.model = load_model( - model_config_path=model_config_path, - model_checkpoint_path=model_checkpoint_path, - device=device - ).to(device) - self.device = device - - def predict_with_caption( - self, - image: np.ndarray, - caption: str, - box_threshold: float = 0.35, - text_threshold: float = 0.25 - ) -> Tuple[sv.Detections, List[str]]: - """ - import cv2 - - image = cv2.imread(IMAGE_PATH) - - model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH) - detections, labels = model.predict_with_caption( - image=image, - caption=caption, - box_threshold=BOX_THRESHOLD, - text_threshold=TEXT_THRESHOLD - ) - - import supervision as sv - - box_annotator = sv.BoxAnnotator() - annotated_image = box_annotator.annotate(scene=image, detections=detections, labels=labels) - """ - processed_image = Model.preprocess_image(image_bgr=image).to(self.device) - boxes, logits, phrases = predict( - model=self.model, - image=processed_image, - caption=caption, - box_threshold=box_threshold, - text_threshold=text_threshold, - device=self.device) - source_h, source_w, _ = image.shape - detections = Model.post_process_result( - source_h=source_h, - source_w=source_w, - boxes=boxes, - logits=logits) - return detections, phrases - - def predict_with_classes( - self, - image: np.ndarray, - classes: List[str], - box_threshold: float, - text_threshold: float - ) -> sv.Detections: - """ - import cv2 - - image = cv2.imread(IMAGE_PATH) - - model = Model(model_config_path=CONFIG_PATH, model_checkpoint_path=WEIGHTS_PATH) - detections = model.predict_with_classes( - image=image, - classes=CLASSES, - box_threshold=BOX_THRESHOLD, - text_threshold=TEXT_THRESHOLD - ) - - - import supervision as sv - - box_annotator = sv.BoxAnnotator() - annotated_image = box_annotator.annotate(scene=image, detections=detections) - """ - caption = ". 
".join(classes) - processed_image = Model.preprocess_image(image_bgr=image).to(self.device) - boxes, logits, phrases = predict( - model=self.model, - image=processed_image, - caption=caption, - box_threshold=box_threshold, - text_threshold=text_threshold, - device=self.device) - source_h, source_w, _ = image.shape - detections = Model.post_process_result( - source_h=source_h, - source_w=source_w, - boxes=boxes, - logits=logits) - class_id = Model.phrases2classes(phrases=phrases, classes=classes) - detections.class_id = class_id - return detections - - @staticmethod - def preprocess_image(image_bgr: np.ndarray) -> torch.Tensor: - transform = T.Compose( - [ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]), - ] - ) - image_pillow = Image.fromarray(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)) - image_transformed, _ = transform(image_pillow, None) - return image_transformed - - @staticmethod - def post_process_result( - source_h: int, - source_w: int, - boxes: torch.Tensor, - logits: torch.Tensor - ) -> sv.Detections: - boxes = boxes * torch.Tensor([source_w, source_h, source_w, source_h]) - xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy() - confidence = logits.numpy() - return sv.Detections(xyxy=xyxy, confidence=confidence) - - @staticmethod - def phrases2classes(phrases: List[str], classes: List[str]) -> np.ndarray: - class_ids = [] - for phrase in phrases: - try: - # class_ids.append(classes.index(phrase)) - class_ids.append(Model.find_index(phrase, classes)) - except ValueError: - class_ids.append(None) - return np.array(class_ids) - - @staticmethod - def find_index(string, lst): - # if meet string like "lake river" will only keep "lake" - # this is an hack implementation for visualization which will be updated in the future - string = string.lower().split()[0] - for i, s in enumerate(lst): - if string in s.lower(): - return i - print("There's a wrong phrase happen, this is because of our post-process merged wrong tokens, which will be modified in the future. 
We will assign it with a random label at this time.") - return 0 \ No newline at end of file diff --git a/spaces/ZettaFi/SeeFood/app.py b/spaces/ZettaFi/SeeFood/app.py deleted file mode 100644 index 00608ed891e0944aab7bccf6722451ba4fce7577..0000000000000000000000000000000000000000 --- a/spaces/ZettaFi/SeeFood/app.py +++ /dev/null @@ -1,16 +0,0 @@ -from fastai.vision.all import * -import gradio as gr - -learn = load_learner("hotdogModel.pkl") - - -def classify_image(image: Image): - prediction, index, probability = learn.predict(image) - return "is_hotdog.png" if prediction == "hotdog" else "not_hotdog.png" - - -examples = ["examples/hotdog.jpg", "examples/burger.jpg", "examples/sandwich.jpg", "examples/dog.jpg", - "examples/hotdog_dog.jpg", "examples/fancy_hotdog.jpg"] - -iface = gr.Interface(fn=classify_image, inputs="image", outputs="image", examples=examples, allow_flagging="never") -iface.launch(inline=False) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py deleted file mode 100644 index be6772fa6c471a7a65b77f2f18dfd217f4bd3289..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py +++ /dev/null @@ -1,377 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, ConvModule, build_upsample_layer -from mmcv.ops.carafe import CARAFEPack -from mmcv.runner import auto_fp16, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.core import mask_target -from mmdet.models.builder import HEADS, build_loss - -BYTES_PER_FLOAT = 4 -# TODO: This memory limit may be too much or too little. It would be better to -# determine it based on available resources. 
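# [editor's note, not part of the deleted file] Illustrative arithmetic for the
# cap below, assuming float32 masks: the number of ROI masks that fit in one
# paste chunk is roughly GPU_MEM_LIMIT // (img_h * img_w * BYTES_PER_FLOAT),
# e.g. 1024**3 // (1024 * 1024 * 4) == 256 masks per chunk on a 1024x1024 image.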
-GPU_MEM_LIMIT = 1024**3 # 1 GB memory limit - - -@HEADS.register_module() -class FCNMaskHead(nn.Module): - - def __init__(self, - num_convs=4, - roi_feat_size=14, - in_channels=256, - conv_kernel_size=3, - conv_out_channels=256, - num_classes=80, - class_agnostic=False, - upsample_cfg=dict(type='deconv', scale_factor=2), - conv_cfg=None, - norm_cfg=None, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)): - super(FCNMaskHead, self).__init__() - self.upsample_cfg = upsample_cfg.copy() - if self.upsample_cfg['type'] not in [ - None, 'deconv', 'nearest', 'bilinear', 'carafe' - ]: - raise ValueError( - f'Invalid upsample method {self.upsample_cfg["type"]}, ' - 'accepted methods are "deconv", "nearest", "bilinear", ' - '"carafe"') - self.num_convs = num_convs - # WARN: roi_feat_size is reserved and not used - self.roi_feat_size = _pair(roi_feat_size) - self.in_channels = in_channels - self.conv_kernel_size = conv_kernel_size - self.conv_out_channels = conv_out_channels - self.upsample_method = self.upsample_cfg.get('type') - self.scale_factor = self.upsample_cfg.pop('scale_factor', None) - self.num_classes = num_classes - self.class_agnostic = class_agnostic - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.fp16_enabled = False - self.loss_mask = build_loss(loss_mask) - - self.convs = nn.ModuleList() - for i in range(self.num_convs): - in_channels = ( - self.in_channels if i == 0 else self.conv_out_channels) - padding = (self.conv_kernel_size - 1) // 2 - self.convs.append( - ConvModule( - in_channels, - self.conv_out_channels, - self.conv_kernel_size, - padding=padding, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg)) - upsample_in_channels = ( - self.conv_out_channels if self.num_convs > 0 else in_channels) - upsample_cfg_ = self.upsample_cfg.copy() - if self.upsample_method is None: - self.upsample = None - elif self.upsample_method == 'deconv': - upsample_cfg_.update( - in_channels=upsample_in_channels, - out_channels=self.conv_out_channels, - kernel_size=self.scale_factor, - stride=self.scale_factor) - self.upsample = build_upsample_layer(upsample_cfg_) - elif self.upsample_method == 'carafe': - upsample_cfg_.update( - channels=upsample_in_channels, scale_factor=self.scale_factor) - self.upsample = build_upsample_layer(upsample_cfg_) - else: - # suppress warnings - align_corners = (None - if self.upsample_method == 'nearest' else False) - upsample_cfg_.update( - scale_factor=self.scale_factor, - mode=self.upsample_method, - align_corners=align_corners) - self.upsample = build_upsample_layer(upsample_cfg_) - - out_channels = 1 if self.class_agnostic else self.num_classes - logits_in_channel = ( - self.conv_out_channels - if self.upsample_method == 'deconv' else upsample_in_channels) - self.conv_logits = Conv2d(logits_in_channel, out_channels, 1) - self.relu = nn.ReLU(inplace=True) - self.debug_imgs = None - - def init_weights(self): - for m in [self.upsample, self.conv_logits]: - if m is None: - continue - elif isinstance(m, CARAFEPack): - m.init_weights() - else: - nn.init.kaiming_normal_( - m.weight, mode='fan_out', nonlinearity='relu') - nn.init.constant_(m.bias, 0) - - @auto_fp16() - def forward(self, x): - for conv in self.convs: - x = conv(x) - if self.upsample is not None: - x = self.upsample(x) - if self.upsample_method == 'deconv': - x = self.relu(x) - mask_pred = self.conv_logits(x) - return mask_pred - - def get_targets(self, sampling_results, gt_masks, rcnn_train_cfg): - pos_proposals = [res.pos_bboxes for res in sampling_results] - 
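- # Added note (hedged, not in the original file): pos_proposals holds the
- # sampled positive boxes per image; pos_assigned_gt_inds below records which
- # ground-truth instance each positive box was matched to, so mask_target can
- # crop the correct GT mask for every RoI.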
pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - mask_targets = mask_target(pos_proposals, pos_assigned_gt_inds, - gt_masks, rcnn_train_cfg) - return mask_targets - - @force_fp32(apply_to=('mask_pred', )) - def loss(self, mask_pred, mask_targets, labels): - """ - Example: - >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA - >>> N = 7 # N = number of extracted ROIs - >>> C, H, W = 11, 32, 32 - >>> # Create example instance of FCN Mask Head. - >>> # There are lots of variations depending on the configuration - >>> self = FCNMaskHead(num_classes=C, num_convs=1) - >>> inputs = torch.rand(N, self.in_channels, H, W) - >>> mask_pred = self.forward(inputs) - >>> sf = self.scale_factor - >>> labels = torch.randint(0, C, size=(N,)) - >>> # With the default properties the mask targets should indicate - >>> # a (potentially soft) single-class label - >>> mask_targets = torch.rand(N, H * sf, W * sf) - >>> loss = self.loss(mask_pred, mask_targets, labels) - >>> print('loss = {!r}'.format(loss)) - """ - loss = dict() - if mask_pred.size(0) == 0: - loss_mask = mask_pred.sum() - else: - if self.class_agnostic: - loss_mask = self.loss_mask(mask_pred, mask_targets, - torch.zeros_like(labels)) - else: - loss_mask = self.loss_mask(mask_pred, mask_targets, labels) - loss['loss_mask'] = loss_mask - return loss - - def get_seg_masks(self, mask_pred, det_bboxes, det_labels, rcnn_test_cfg, - ori_shape, scale_factor, rescale): - """Get segmentation masks from mask_pred and bboxes. - - Args: - mask_pred (Tensor or ndarray): shape (n, #class, h, w). - For single-scale testing, mask_pred is the direct output of - model, whose type is Tensor, while for multi-scale testing, - it will be converted to numpy array outside of this method. - det_bboxes (Tensor): shape (n, 4/5) - det_labels (Tensor): shape (n, ) - rcnn_test_cfg (dict): rcnn testing config - ori_shape (Tuple): original image height and width, shape (2,) - scale_factor(float | Tensor): If ``rescale is True``, box - coordinates are divided by this scale factor to fit - ``ori_shape``. - rescale (bool): If True, the resulting masks will be rescaled to - ``ori_shape``. - - Returns: - list[list]: encoded masks. The c-th item in the outer list - corresponds to the c-th class. Given the c-th outer list, the - i-th item in that inner list is the mask for the i-th box with - class label c. - - Example: - >>> import mmcv - >>> from mmdet.models.roi_heads.mask_heads.fcn_mask_head import * # NOQA - >>> N = 7 # N = number of extracted ROIs - >>> C, H, W = 11, 32, 32 - >>> # Create example instance of FCN Mask Head. - >>> self = FCNMaskHead(num_classes=C, num_convs=0) - >>> inputs = torch.rand(N, self.in_channels, H, W) - >>> mask_pred = self.forward(inputs) - >>> # Each input is associated with some bounding box - >>> det_bboxes = torch.Tensor([[1, 1, 42, 42 ]] * N) - >>> det_labels = torch.randint(0, C, size=(N,)) - >>> rcnn_test_cfg = mmcv.Config({'mask_thr_binary': 0, }) - >>> ori_shape = (H * 4, W * 4) - >>> scale_factor = torch.FloatTensor((1, 1)) - >>> rescale = False - >>> # Encoded masks are a list for each category. 
- >>> encoded_masks = self.get_seg_masks( - >>> mask_pred, det_bboxes, det_labels, rcnn_test_cfg, ori_shape, - >>> scale_factor, rescale - >>> ) - >>> assert len(encoded_masks) == C - >>> assert sum(list(map(len, encoded_masks))) == N - """ - if isinstance(mask_pred, torch.Tensor): - mask_pred = mask_pred.sigmoid() - else: - mask_pred = det_bboxes.new_tensor(mask_pred) - - device = mask_pred.device - cls_segms = [[] for _ in range(self.num_classes) - ] # BG is not included in num_classes - bboxes = det_bboxes[:, :4] - labels = det_labels - - if rescale: - img_h, img_w = ori_shape[:2] - else: - if isinstance(scale_factor, float): - img_h = np.round(ori_shape[0] * scale_factor).astype(np.int32) - img_w = np.round(ori_shape[1] * scale_factor).astype(np.int32) - else: - w_scale, h_scale = scale_factor[0], scale_factor[1] - img_h = np.round(ori_shape[0] * h_scale.item()).astype( - np.int32) - img_w = np.round(ori_shape[1] * w_scale.item()).astype( - np.int32) - scale_factor = 1.0 - - if not isinstance(scale_factor, (float, torch.Tensor)): - scale_factor = bboxes.new_tensor(scale_factor) - bboxes = bboxes / scale_factor - - if torch.onnx.is_in_onnx_export(): - # TODO: Remove after F.grid_sample is supported. - from torchvision.models.detection.roi_heads \ - import paste_masks_in_image - masks = paste_masks_in_image(mask_pred, bboxes, ori_shape[:2]) - thr = rcnn_test_cfg.get('mask_thr_binary', 0) - if thr > 0: - masks = masks >= thr - return masks - - N = len(mask_pred) - # The actual implementation split the input into chunks, - # and paste them chunk by chunk. - if device.type == 'cpu': - # CPU is most efficient when they are pasted one by one with - # skip_empty=True, so that it performs minimal number of - # operations. - num_chunks = N - else: - # GPU benefits from parallelism for larger chunks, - # but may have memory issue - num_chunks = int( - np.ceil(N * img_h * img_w * BYTES_PER_FLOAT / GPU_MEM_LIMIT)) - assert (num_chunks <= - N), 'Default GPU_MEM_LIMIT is too small; try increasing it' - chunks = torch.chunk(torch.arange(N, device=device), num_chunks) - - threshold = rcnn_test_cfg.mask_thr_binary - im_mask = torch.zeros( - N, - img_h, - img_w, - device=device, - dtype=torch.bool if threshold >= 0 else torch.uint8) - - if not self.class_agnostic: - mask_pred = mask_pred[range(N), labels][:, None] - - for inds in chunks: - masks_chunk, spatial_inds = _do_paste_mask( - mask_pred[inds], - bboxes[inds], - img_h, - img_w, - skip_empty=device.type == 'cpu') - - if threshold >= 0: - masks_chunk = (masks_chunk >= threshold).to(dtype=torch.bool) - else: - # for visualization and debugging - masks_chunk = (masks_chunk * 255).to(dtype=torch.uint8) - - im_mask[(inds, ) + spatial_inds] = masks_chunk - - for i in range(N): - cls_segms[labels[i]].append(im_mask[i].detach().cpu().numpy()) - return cls_segms - - -def _do_paste_mask(masks, boxes, img_h, img_w, skip_empty=True): - """Paste instance masks according to boxes. - - This implementation is modified from - https://github.com/facebookresearch/detectron2/ - - Args: - masks (Tensor): N, 1, H, W - boxes (Tensor): N, 4 - img_h (int): Height of the image to be pasted. - img_w (int): Width of the image to be pasted. - skip_empty (bool): Only paste masks within the region that - tightly bound all boxes, and returns the results this region only. - An important optimization for CPU. - - Returns: - tuple: (Tensor, tuple). The first item is mask tensor, the second one - is the slice object. - If skip_empty == False, the whole image will be pasted. 
It will - return a mask of shape (N, img_h, img_w) and an empty tuple. - If skip_empty == True, only area around the mask will be pasted. - A mask of shape (N, h', w') and its start and end coordinates - in the original image will be returned. - """ - # On GPU, paste all masks together (up to chunk size) - # by using the entire image to sample the masks - # Compared to pasting them one by one, - # this has more operations but is faster on COCO-scale dataset. - device = masks.device - if skip_empty: - x0_int, y0_int = torch.clamp( - boxes.min(dim=0).values.floor()[:2] - 1, - min=0).to(dtype=torch.int32) - x1_int = torch.clamp( - boxes[:, 2].max().ceil() + 1, max=img_w).to(dtype=torch.int32) - y1_int = torch.clamp( - boxes[:, 3].max().ceil() + 1, max=img_h).to(dtype=torch.int32) - else: - x0_int, y0_int = 0, 0 - x1_int, y1_int = img_w, img_h - x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1 - - N = masks.shape[0] - - img_y = torch.arange( - y0_int, y1_int, device=device, dtype=torch.float32) + 0.5 - img_x = torch.arange( - x0_int, x1_int, device=device, dtype=torch.float32) + 0.5 - img_y = (img_y - y0) / (y1 - y0) * 2 - 1 - img_x = (img_x - x0) / (x1 - x0) * 2 - 1 - # img_x, img_y have shapes (N, w), (N, h) - if torch.isinf(img_x).any(): - inds = torch.where(torch.isinf(img_x)) - img_x[inds] = 0 - if torch.isinf(img_y).any(): - inds = torch.where(torch.isinf(img_y)) - img_y[inds] = 0 - - gx = img_x[:, None, :].expand(N, img_y.size(1), img_x.size(1)) - gy = img_y[:, :, None].expand(N, img_y.size(1), img_x.size(1)) - grid = torch.stack([gx, gy], dim=3) - - if torch.onnx.is_in_onnx_export(): - raise RuntimeError( - 'Exporting F.grid_sample from Pytorch to ONNX is not supported.') - img_masks = F.grid_sample( - masks.to(dtype=torch.float32), grid, align_corners=False) - - if skip_empty: - return img_masks[:, 0], (slice(y0_int, y1_int), slice(x0_int, x1_int)) - else: - return img_masks[:, 0], () diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/base_roi_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/base_roi_head.py deleted file mode 100644 index 2d61cc08007924c61b4a53d7fbc6e6fedfd68f08..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/base_roi_head.py +++ /dev/null @@ -1,103 +0,0 @@ -from abc import ABCMeta, abstractmethod - -import torch.nn as nn - -from ..builder import build_shared_head - - -class BaseRoIHead(nn.Module, metaclass=ABCMeta): - """Base class for RoIHeads.""" - - def __init__(self, - bbox_roi_extractor=None, - bbox_head=None, - mask_roi_extractor=None, - mask_head=None, - shared_head=None, - train_cfg=None, - test_cfg=None): - super(BaseRoIHead, self).__init__() - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if shared_head is not None: - self.shared_head = build_shared_head(shared_head) - - if bbox_head is not None: - self.init_bbox_head(bbox_roi_extractor, bbox_head) - - if mask_head is not None: - self.init_mask_head(mask_roi_extractor, mask_head) - - self.init_assigner_sampler() - - @property - def with_bbox(self): - """bool: whether the RoI head contains a `bbox_head`""" - return hasattr(self, 'bbox_head') and self.bbox_head is not None - - @property - def with_mask(self): - """bool: whether the RoI head contains a `mask_head`""" - return hasattr(self, 'mask_head') and self.mask_head is not None - - @property - def with_shared_head(self): - """bool: whether the RoI head contains a 
`shared_head`""" - return hasattr(self, 'shared_head') and self.shared_head is not None - - @abstractmethod - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - pass - - @abstractmethod - def init_bbox_head(self): - """Initialize ``bbox_head``""" - pass - - @abstractmethod - def init_mask_head(self): - """Initialize ``mask_head``""" - pass - - @abstractmethod - def init_assigner_sampler(self): - """Initialize assigner and sampler.""" - pass - - @abstractmethod - def forward_train(self, - x, - img_meta, - proposal_list, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None, - gt_masks=None, - **kwargs): - """Forward function during training.""" - - async def async_simple_test(self, x, img_meta, **kwargs): - """Asynchronized test function.""" - raise NotImplementedError - - def simple_test(self, - x, - proposal_list, - img_meta, - proposals=None, - rescale=False, - **kwargs): - """Test without augmentation.""" - - def aug_test(self, x, proposal_list, img_metas, rescale=False, **kwargs): - """Test with augmentations. - - If rescale is False, then returned bboxes and masks will fit the scale - of imgs[0]. - """ diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/utils/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/utils/__init__.py deleted file mode 100644 index a263e31c1e3977712827ca229bbc04910b4e928e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/utils/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .flops_counter import get_model_complexity_info -from .fuse_conv_bn import fuse_conv_bn -from .sync_bn import revert_sync_batchnorm -from .weight_init import (INITIALIZERS, Caffe2XavierInit, ConstantInit, - KaimingInit, NormalInit, PretrainedInit, - TruncNormalInit, UniformInit, XavierInit, - bias_init_with_prob, caffe2_xavier_init, - constant_init, initialize, kaiming_init, normal_init, - trunc_normal_init, uniform_init, xavier_init) - -__all__ = [ - 'get_model_complexity_info', 'bias_init_with_prob', 'caffe2_xavier_init', - 'constant_init', 'kaiming_init', 'normal_init', 'trunc_normal_init', - 'uniform_init', 'xavier_init', 'fuse_conv_bn', 'initialize', - 'INITIALIZERS', 'ConstantInit', 'XavierInit', 'NormalInit', - 'TruncNormalInit', 'UniformInit', 'KaimingInit', 'PretrainedInit', - 'Caffe2XavierInit', 'revert_sync_batchnorm' -] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/image/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/image/__init__.py deleted file mode 100644 index d0051d609d3de4e7562e3fe638335c66617c4d91..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/image/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
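-# Added note (hedged, not in the original file): this package __init__ only
-# re-exports the mmcv.image helpers (colorspace, geometric, io, misc and
-# photometric) so they are importable from a single namespace.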
-from .colorspace import (bgr2gray, bgr2hls, bgr2hsv, bgr2rgb, bgr2ycbcr, - gray2bgr, gray2rgb, hls2bgr, hsv2bgr, imconvert, - rgb2bgr, rgb2gray, rgb2ycbcr, ycbcr2bgr, ycbcr2rgb) -from .geometric import (cutout, imcrop, imflip, imflip_, impad, - impad_to_multiple, imrescale, imresize, imresize_like, - imresize_to_multiple, imrotate, imshear, imtranslate, - rescale_size) -from .io import imfrombytes, imread, imwrite, supported_backends, use_backend -from .misc import tensor2imgs -from .photometric import (adjust_brightness, adjust_color, adjust_contrast, - adjust_lighting, adjust_sharpness, auto_contrast, - clahe, imdenormalize, imequalize, iminvert, - imnormalize, imnormalize_, lut_transform, posterize, - solarize) - -__all__ = [ - 'bgr2gray', 'bgr2hls', 'bgr2hsv', 'bgr2rgb', 'gray2bgr', 'gray2rgb', - 'hls2bgr', 'hsv2bgr', 'imconvert', 'rgb2bgr', 'rgb2gray', 'imrescale', - 'imresize', 'imresize_like', 'imresize_to_multiple', 'rescale_size', - 'imcrop', 'imflip', 'imflip_', 'impad', 'impad_to_multiple', 'imrotate', - 'imfrombytes', 'imread', 'imwrite', 'supported_backends', 'use_backend', - 'imdenormalize', 'imnormalize', 'imnormalize_', 'iminvert', 'posterize', - 'solarize', 'rgb2ycbcr', 'bgr2ycbcr', 'ycbcr2rgb', 'ycbcr2bgr', - 'tensor2imgs', 'imshear', 'imtranslate', 'adjust_color', 'imequalize', - 'adjust_brightness', 'adjust_contrast', 'lut_transform', 'clahe', - 'adjust_sharpness', 'auto_contrast', 'cutout', 'adjust_lighting' -] diff --git a/spaces/abionchito/rvc-models/infer_pack/modules.py b/spaces/abionchito/rvc-models/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/abionchito/rvc-models/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
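- # Added note (hedged, not in the original file): the stack built below is
- # Conv1d -> LayerNorm -> ReLU -> Dropout repeated n_layers times; the final
- # 1x1 projection is zero-initialized, so the residual branch in forward()
- # starts out as the identity.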
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dilated and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
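- # Added note (hedged, not in the original file): 'output' accumulates the
- # skip contributions of all layers; each layer runs a dilated conv through
- # the fused tanh/sigmoid gate before splitting into residual and skip halves.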
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
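- # Added note (hedged, not in the original file): with num_bins=10 the
- # projection above emits 3*10 - 1 = 29 values per position of the
- # transformed half: 10 unnormalized widths, 10 unnormalized heights and
- # 9 interior-knot derivatives for the rational-quadratic spline applied below.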
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/win32.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/win32.py deleted file mode 100644 index 2549ebf052917225a72ba183124dde560565d7fe..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/app/win32.py +++ /dev/null @@ -1,118 +0,0 @@ -import ctypes - -from .base import PlatformEventLoop - -from pyglet.libs.win32 import _kernel32, _user32, types, constants -from pyglet.libs.win32.types import * - - -class Win32EventLoop(PlatformEventLoop): - def __init__(self): - super().__init__() - - self._next_idle_time = None - - # Force immediate creation of an event queue on this thread -- note - # that since event loop is created on pyglet.app import, whatever - # imports pyglet.app _must_ own the main run loop. - msg = types.MSG() - _user32.PeekMessageW(ctypes.byref(msg), 0, - constants.WM_USER, constants.WM_USER, - constants.PM_NOREMOVE) - - self._event_thread = _kernel32.GetCurrentThreadId() - - self._wait_objects = [] - self._recreate_wait_objects_array() - - self._timer_proc = types.TIMERPROC(self._timer_proc_func) - self._timer = _user32.SetTimer(0, 0, constants.USER_TIMER_MAXIMUM, self._timer_proc) - self._timer_func = None - - # Windows Multimedia timer precision functions - # https://learn.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod - self._winmm = ctypes.windll.LoadLibrary('winmm') - timecaps = TIMECAPS() - self._winmm.timeGetDevCaps(ctypes.byref(timecaps), ctypes.sizeof(timecaps)) - self._timer_precision = min(max(1, timecaps.wPeriodMin), timecaps.wPeriodMax) - - def add_wait_object(self, obj, func): - self._wait_objects.append((obj, func)) - self._recreate_wait_objects_array() - - def remove_wait_object(self, obj): - for i, (_object, _) in enumerate(self._wait_objects): - if obj == _object: - del self._wait_objects[i] - break - self._recreate_wait_objects_array() - - def _recreate_wait_objects_array(self): - if not self._wait_objects: - self._wait_objects_n = 0 - self._wait_objects_array = None - return - - self._wait_objects_n = len(self._wait_objects) - self._wait_objects_array = (HANDLE * self._wait_objects_n)(*[o for o, f in self._wait_objects]) - - def start(self): - if _kernel32.GetCurrentThreadId() != self._event_thread: - raise RuntimeError('EventLoop.run() must be called from the same ' + - 'thread that imports pyglet.app') - - self._timer_func = None - - self._winmm.timeBeginPeriod(self._timer_precision) - - def step(self, timeout=None): - self.dispatch_posted_events() - - msg = types.MSG() - if timeout is None: - timeout = constants.INFINITE - else: - timeout = int(timeout * 1000) # milliseconds - - result = _user32.MsgWaitForMultipleObjects( - self._wait_objects_n, - self._wait_objects_array, - False, - timeout, - 
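- # Added note (hedged, not in the original file): QS_ALLINPUT makes the
- # wait wake on any queued message in addition to the wait handles.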
constants.QS_ALLINPUT) - result -= constants.WAIT_OBJECT_0 - - if result == self._wait_objects_n: - while _user32.PeekMessageW(ctypes.byref(msg), - 0, 0, 0, constants.PM_REMOVE): - _user32.TranslateMessage(ctypes.byref(msg)) - _user32.DispatchMessageW(ctypes.byref(msg)) - elif 0 <= result < self._wait_objects_n: - obj, func = self._wait_objects[result] - func() - - # Return True if timeout was interrupted. - return result <= self._wait_objects_n - - def stop(self): - self._winmm.timeEndPeriod(self._timer_precision) - - def notify(self): - # Nudge the event loop with a message it will discard. Note that only - # user events are actually posted. The posted event will not - # interrupt the window move/size drag loop -- it seems there's no way - # to do this. - _user32.PostThreadMessageW(self._event_thread, constants.WM_USER, 0, 0) - - def set_timer(self, func, interval): - if func is None or interval is None: - interval = constants.USER_TIMER_MAXIMUM - else: - interval = int(interval * 1000) # milliseconds - - self._timer_func = func - _user32.SetTimer(0, self._timer, interval, self._timer_proc) - - def _timer_proc_func(self, hwnd, msg, timer, t): - if self._timer_func: - self._timer_func() diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/directsound/adaptation.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/directsound/adaptation.py deleted file mode 100644 index af53dabce21066681458217ef8ddc8c6e15f7240..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/media/drivers/directsound/adaptation.py +++ /dev/null @@ -1,403 +0,0 @@ -import math -import ctypes - -from . import interface -from pyglet.util import debug_print -from pyglet.media.mediathreads import PlayerWorkerThread -from pyglet.media.drivers.base import AbstractAudioDriver, AbstractAudioPlayer, MediaEvent -from pyglet.media.drivers.listener import AbstractListener - -_debug = debug_print('debug_media') - - -def _convert_coordinates(coordinates): - x, y, z = coordinates - return x, y, -z - - -def _gain2db(gain): - """ - Convert linear gain in range [0.0, 1.0] to 100ths of dB. - - Power gain = P1/P2 - dB = 2 log(P1/P2) - dB * 100 = 1000 * log(power gain) - """ - if gain <= 0: - return -10000 - return max(-10000, min(int(1000 * math.log2(min(gain, 1))), 0)) - - -def _db2gain(db): - """Convert 100ths of dB to linear gain.""" - return math.pow(10.0, float(db)/1000.0) - - -class DirectSoundAudioPlayer(AbstractAudioPlayer): - # Need to cache these because pyglet API allows update separately, but - # DSound requires both to be set at once. - _cone_inner_angle = 360 - _cone_outer_angle = 360 - - min_buffer_size = 9600 - - def __init__(self, driver, ds_driver, source, player): - super(DirectSoundAudioPlayer, self).__init__(source, player) - - # We keep here a strong reference because the AudioDriver is anyway - # a singleton object which will only be deleted when the application - # shuts down. The AudioDriver does not keep a ref to the AudioPlayer. - self.driver = driver - self._ds_driver = ds_driver - - # Desired play state (may be actually paused due to underrun -- not - # implemented yet). - self._playing = False - - # Up to one audio data may be buffered if too much data was received - # from the source that could not be written immediately into the - # buffer. See _refill(). 
- self._audiodata_buffer = None - - # Theoretical write and play cursors for an infinite buffer. play - # cursor is always <= write cursor (when equal, underrun is - # happening). - self._write_cursor = 0 - self._play_cursor = 0 - - # Cursor position of end of data. Silence is written after - # eos for one buffer size. - self._eos_cursor = None - - # Indexes into DSound circular buffer. Complications ensue wrt each - # other to avoid writing over the play cursor. See _get_write_size and - # write(). - self._play_cursor_ring = 0 - self._write_cursor_ring = 0 - - # List of (play_cursor, MediaEvent), in sort order - self._events = [] - - # List of (cursor, timestamp), in sort order (cursor gives expiry - # place of the timestamp) - self._timestamps = [] - - audio_format = source.audio_format - - # DSound buffer - self._ds_buffer = self._ds_driver.create_buffer(audio_format) - self._buffer_size = self._ds_buffer.buffer_size - - self._ds_buffer.current_position = 0 - - self._refill(self._buffer_size) - - def __del__(self): - # We decrease the IDirectSound refcount - self.driver._ds_driver._native_dsound.Release() - - def delete(self): - self.driver.worker.remove(self) - - def play(self): - assert _debug('DirectSound play') - self.driver.worker.add(self) - - if not self._playing: - self._get_audiodata() # prebuffer if needed - self._playing = True - self._ds_buffer.play() - - assert _debug('return DirectSound play') - - def stop(self): - assert _debug('DirectSound stop') - self.driver.worker.remove(self) - - if self._playing: - self._playing = False - self._ds_buffer.stop() - - assert _debug('return DirectSound stop') - - def clear(self): - assert _debug('DirectSound clear') - super(DirectSoundAudioPlayer, self).clear() - self._ds_buffer.current_position = 0 - self._play_cursor_ring = self._write_cursor_ring = 0 - self._play_cursor = self._write_cursor - self._eos_cursor = None - self._audiodata_buffer = None - del self._events[:] - del self._timestamps[:] - - def refill_buffer(self): - write_size = self._get_write_size() - if write_size > self.min_buffer_size: - self._refill(write_size) - return True - return False - - def _refill(self, write_size): - while write_size > 0: - assert _debug('_refill, write_size =', write_size) - audio_data = self._get_audiodata() - - if audio_data is not None: - assert _debug('write', audio_data.length) - length = min(write_size, audio_data.length) - self.write(audio_data, length) - write_size -= length - else: - assert _debug('write silence') - self.write(None, write_size) - write_size = 0 - - def _has_underrun(self): - return (self._eos_cursor is not None - and self._play_cursor > self._eos_cursor) - - def _dispatch_new_event(self, event_name): - MediaEvent(event_name).sync_dispatch_to_player(self.player) - - def _get_audiodata(self): - if self._audiodata_buffer is None or self._audiodata_buffer.length == 0: - self._get_new_audiodata() - - return self._audiodata_buffer - - def _get_new_audiodata(self): - assert _debug('Getting new audio data buffer.') - # Pass a reference of ourself to allow the audio decoding to get time - # information for synchronization. 
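- # Added note (hedged, not in the original file): the audio/master clock
- # difference obtained below is passed to get_audio_data() as a compensation
- # time so decoding can stay synchronized.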
- compensation_time = self.get_audio_time_diff() - self._audiodata_buffer = self.source.get_audio_data(self._buffer_size, compensation_time) - - if self._audiodata_buffer is not None: - assert _debug('New audio data available: {} bytes'.format(self._audiodata_buffer.length)) - - if self._eos_cursor is not None: - self._move_write_cursor_after_eos() - - self._add_audiodata_events(self._audiodata_buffer) - self._add_audiodata_timestamp(self._audiodata_buffer) - self._eos_cursor = None - elif self._eos_cursor is None: - assert _debug('No more audio data.') - self._eos_cursor = self._write_cursor - - def _move_write_cursor_after_eos(self): - # Set the write cursor back to eos_cursor or play_cursor to prevent gaps - if self._play_cursor < self._eos_cursor: - cursor_diff = self._write_cursor - self._eos_cursor - assert _debug('Moving cursor back', cursor_diff) - self._write_cursor = self._eos_cursor - self._write_cursor_ring -= cursor_diff - self._write_cursor_ring %= self._buffer_size - - else: - cursor_diff = self._play_cursor - self._eos_cursor - assert _debug('Moving cursor back', cursor_diff) - self._write_cursor = self._play_cursor - self._write_cursor_ring -= cursor_diff - self._write_cursor_ring %= self._buffer_size - - def _add_audiodata_events(self, audio_data): - for event in audio_data.events: - event_cursor = self._write_cursor + event.timestamp * \ - self.source.audio_format.bytes_per_second - assert _debug('Adding event', event, 'at', event_cursor) - self._events.append((event_cursor, event)) - - def _add_audiodata_timestamp(self, audio_data): - ts_cursor = self._write_cursor + audio_data.length - self._timestamps.append( - (ts_cursor, audio_data.timestamp + audio_data.duration)) - - def update_play_cursor(self): - play_cursor_ring = self._ds_buffer.current_position.play_cursor - if play_cursor_ring < self._play_cursor_ring: - # Wrapped around - self._play_cursor += self._buffer_size - self._play_cursor_ring - self._play_cursor_ring = 0 - self._play_cursor += play_cursor_ring - self._play_cursor_ring - self._play_cursor_ring = play_cursor_ring - - self._dispatch_pending_events() - self._cleanup_timestamps() - self._check_underrun() - - def _dispatch_pending_events(self): - pending_events = [] - while self._events and self._events[0][0] <= self._play_cursor: - _, event = self._events.pop(0) - pending_events.append(event) - assert _debug('Dispatching pending events: {}'.format(pending_events)) - assert _debug('Remaining events: {}'.format(self._events)) - - for event in pending_events: - event._sync_dispatch_to_player(self.player) - - def _cleanup_timestamps(self): - while self._timestamps and self._timestamps[0][0] < self._play_cursor: - del self._timestamps[0] - - def _check_underrun(self): - if self._playing and self._has_underrun(): - assert _debug('underrun, stopping') - self.stop() - self._dispatch_new_event('on_eos') - - def _get_write_size(self): - self.update_play_cursor() - - play_cursor = self._play_cursor - write_cursor = self._write_cursor - - return self._buffer_size - max(write_cursor - play_cursor, 0) - - def write(self, audio_data, length): - # Pass audio_data=None to write silence - if length == 0: - return 0 - - write_ptr = self._ds_buffer.lock(self._write_cursor_ring, length) - assert 0 < length <= self._buffer_size - assert length == write_ptr.audio_length_1.value + write_ptr.audio_length_2.value - - if audio_data: - ctypes.memmove(write_ptr.audio_ptr_1, audio_data.data, write_ptr.audio_length_1.value) - audio_data.consume(write_ptr.audio_length_1.value, 
self.source.audio_format) - if write_ptr.audio_length_2.value > 0: - ctypes.memmove(write_ptr.audio_ptr_2, audio_data.data, write_ptr.audio_length_2.value) - audio_data.consume(write_ptr.audio_length_2.value, self.source.audio_format) - else: - if self.source.audio_format.sample_size == 8: - c = 0x80 - else: - c = 0 - ctypes.memset(write_ptr.audio_ptr_1, c, write_ptr.audio_length_1.value) - if write_ptr.audio_length_2.value > 0: - ctypes.memset(write_ptr.audio_ptr_2, c, write_ptr.audio_length_2.value) - self._ds_buffer.unlock(write_ptr) - - self._write_cursor += length - self._write_cursor_ring += length - self._write_cursor_ring %= self._buffer_size - - def get_time(self): - self.update_play_cursor() - if self._timestamps: - cursor, ts = self._timestamps[0] - result = ts + (self._play_cursor - cursor) / float(self.source.audio_format.bytes_per_second) - else: - result = None - - return result - - def set_volume(self, volume): - self._ds_buffer.volume = _gain2db(volume) - - def set_position(self, position): - if self._ds_buffer.is3d: - self._ds_buffer.position = _convert_coordinates(position) - - def set_min_distance(self, min_distance): - if self._ds_buffer.is3d: - self._ds_buffer.min_distance = min_distance - - def set_max_distance(self, max_distance): - if self._ds_buffer.is3d: - self._ds_buffer.max_distance = max_distance - - def set_pitch(self, pitch): - frequency = int(pitch * self.source.audio_format.sample_rate) - self._ds_buffer.frequency = frequency - - def set_cone_orientation(self, cone_orientation): - if self._ds_buffer.is3d: - self._ds_buffer.cone_orientation = _convert_coordinates(cone_orientation) - - def set_cone_inner_angle(self, cone_inner_angle): - if self._ds_buffer.is3d: - self._cone_inner_angle = int(cone_inner_angle) - self._set_cone_angles() - - def set_cone_outer_angle(self, cone_outer_angle): - if self._ds_buffer.is3d: - self._cone_outer_angle = int(cone_outer_angle) - self._set_cone_angles() - - def _set_cone_angles(self): - inner = min(self._cone_inner_angle, self._cone_outer_angle) - outer = max(self._cone_inner_angle, self._cone_outer_angle) - self._ds_buffer.set_cone_angles(inner, outer) - - def set_cone_outer_gain(self, cone_outer_gain): - if self._ds_buffer.is3d: - volume = _gain2db(cone_outer_gain) - self._ds_buffer.cone_outside_volume = volume - - def prefill_audio(self): - write_size = self._get_write_size() - self._refill(write_size) - - -class DirectSoundDriver(AbstractAudioDriver): - def __init__(self): - self._ds_driver = interface.DirectSoundDriver() - self._ds_listener = self._ds_driver.create_listener() - - assert self._ds_driver is not None - assert self._ds_listener is not None - - self.worker = PlayerWorkerThread() - self.worker.start() - - def __del__(self): - self.delete() - - def create_audio_player(self, source, player): - assert self._ds_driver is not None - # We increase IDirectSound refcount for each AudioPlayer instantiated - # This makes sure the AudioPlayer still has a valid _native_dsound to - # clean-up itself during tear-down. 
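- # Added note (hedged, not in the original file): the matching Release() is
- # issued in DirectSoundAudioPlayer.__del__, keeping the COM reference count
- # balanced over each player's lifetime.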
- self._ds_driver._native_dsound.AddRef() - return DirectSoundAudioPlayer(self, self._ds_driver, source, player) - - def get_listener(self): - assert self._ds_driver is not None - assert self._ds_listener is not None - return DirectSoundListener(self._ds_listener, self._ds_driver.primary_buffer) - - def delete(self): - if hasattr(self, 'worker'): - self.worker.stop() - # Make sure the _ds_listener is deleted before the _ds_driver - self._ds_listener = None - - -class DirectSoundListener(AbstractListener): - def __init__(self, ds_listener, ds_buffer): - self._ds_listener = ds_listener - self._ds_buffer = ds_buffer - - def _set_volume(self, volume): - self._volume = volume - self._ds_buffer.volume = _gain2db(volume) - - def _set_position(self, position): - self._position = position - self._ds_listener.position = _convert_coordinates(position) - - def _set_forward_orientation(self, orientation): - self._forward_orientation = orientation - self._set_orientation() - - def _set_up_orientation(self, orientation): - self._up_orientation = orientation - self._set_orientation() - - def _set_orientation(self): - self._ds_listener.orientation = (_convert_coordinates(self._forward_orientation) - + _convert_coordinates(self._up_orientation)) diff --git a/spaces/akhaliq/ESPnet2-TTS/app.py b/spaces/akhaliq/ESPnet2-TTS/app.py deleted file mode 100644 index 1bdd693f636d643f4e8128f91980a0c397193a0d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/ESPnet2-TTS/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import gradio as gr -import time -import torch -import scipy.io.wavfile -from espnet2.bin.tts_inference import Text2Speech -from espnet2.utils.types import str_or_none - -tagen = 'kan-bayashi/ljspeech_vits' -vocoder_tagen = "none" - -text2speechen = Text2Speech.from_pretrained( - model_tag=str_or_none(tagen), - vocoder_tag=str_or_none(vocoder_tagen), - device="cpu", - # Only for Tacotron 2 & Transformer - threshold=0.5, - # Only for Tacotron 2 - minlenratio=0.0, - maxlenratio=10.0, - use_att_constraint=False, - backward_window=1, - forward_window=3, - # Only for FastSpeech & FastSpeech2 & VITS - speed_control_alpha=1.0, - # Only for VITS - noise_scale=0.333, - noise_scale_dur=0.333, -) - - -tagjp = 'kan-bayashi/jsut_full_band_vits_prosody' -vocoder_tagjp = 'none' - -text2speechjp = Text2Speech.from_pretrained( - model_tag=str_or_none(tagjp), - vocoder_tag=str_or_none(vocoder_tagjp), - device="cpu", - # Only for Tacotron 2 & Transformer - threshold=0.5, - # Only for Tacotron 2 - minlenratio=0.0, - maxlenratio=10.0, - use_att_constraint=False, - backward_window=1, - forward_window=3, - # Only for FastSpeech & FastSpeech2 & VITS - speed_control_alpha=1.0, - # Only for VITS - noise_scale=0.333, - noise_scale_dur=0.333, -) - -tagch = 'kan-bayashi/csmsc_full_band_vits' -vocoder_tagch = "none" - -text2speechch = Text2Speech.from_pretrained( - model_tag=str_or_none(tagch), - vocoder_tag=str_or_none(vocoder_tagch), - device="cpu", - # Only for Tacotron 2 & Transformer - threshold=0.5, - # Only for Tacotron 2 - minlenratio=0.0, - maxlenratio=10.0, - use_att_constraint=False, - backward_window=1, - forward_window=3, - # Only for FastSpeech & FastSpeech2 & VITS - speed_control_alpha=1.0, - # Only for VITS - noise_scale=0.333, - noise_scale_dur=0.333, -) - -def inference(text,lang): - with torch.no_grad(): - if lang == "english": - wav = text2speechen(text)["wav"] - scipy.io.wavfile.write("out.wav",text2speechen.fs , wav.view(-1).cpu().numpy()) - if lang == "chinese": - wav = text2speechch(text)["wav"] - 
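- # Added note (hedged, not in the original file): Text2Speech returns a
- # dict whose "wav" entry is a 1-D waveform tensor at the model's
- # sampling rate (fs); it is flattened to numpy before being written out.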
scipy.io.wavfile.write("out.wav",text2speechch.fs , wav.view(-1).cpu().numpy()) - if lang == "japanese": - wav = text2speechjp(text)["wav"] - scipy.io.wavfile.write("out.wav",text2speechjp.fs , wav.view(-1).cpu().numpy()) - return "out.wav" -title = "ESPnet2-TTS" -description = "Gradio demo for ESPnet2-TTS: Extending the Edge of TTS Research. To use it, simply enter your text, or click one of the examples to load them. Read more at the links below." -article = "
ESPnet2-TTS: Extending the Edge of TTS Research | Github Repo
" - -examples=[['This paper describes ESPnet2-TTS, an end-to-end text-to-speech (E2E-TTS) toolkit. ESPnet2-TTS extends our earlier version, ESPnet-TTS, by adding many new features, including: on-the-fly flexible pre-processing, joint training with neural vocoders, and state-of-the-art TTS models with extensions like full-band E2E text-to-waveform modeling, which simplify the training pipeline and further enhance TTS performance. The unified design of our recipes enables users to quickly reproduce state-of-the-art E2E-TTS results',"english"],['レシピの統一された設計により、ユーザーは最先端のE2E-TTSの結果をすばやく再現できます。また、推論用の統合Pythonインターフェースで事前にトレーニングされたモデルを多数提供し、ユーザーがベースラインサンプルを生成してデモを構築するための迅速な手段を提供します。',"japanese"],['对英语和日语语料库的实验评估表明,我们提供的模型合成了与真实情况相当的话语,达到了最先进的水平',"chinese"]] - -gr.Interface( - inference, - [gr.inputs.Textbox(label="input text",lines=10),gr.inputs.Radio(choices=["english", "chinese", "japanese"], type="value", default="english", label="language")], - gr.outputs.Audio(type="file", label="Output"), - title=title, - description=description, - article=article, - enable_queue=True, - examples=examples - ).launch(debug=True) diff --git a/spaces/alamin655/websurfx/public/static/index.js b/spaces/alamin655/websurfx/public/static/index.js deleted file mode 100644 index 050829aac399e4e18c91b52fa6593a8f2df7f717..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/public/static/index.js +++ /dev/null @@ -1,25 +0,0 @@ -/** - * Selects the input element for the search box - * @type {HTMLInputElement} - */ -const searchBox = document.querySelector('input'); - -/** - * Redirects the user to the search results page with the query parameter - */ -function searchWeb() { - const query = searchBox.value.trim(); - if (query) { - window.location.href = `search?q=${encodeURIComponent(query)}`; - } -} - -/** - * Listens for the 'Enter' key press event on the search box and calls the searchWeb function - * @param {KeyboardEvent} e - The keyboard event object - */ -searchBox.addEventListener('keyup', (e) => { - if (e.key === 'Enter') { - searchWeb(); - } -}); diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/__init__.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/__init__.py deleted file mode 100644 index d1d82f157f884dc65160a41b436258d1aaf12e4c..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/html5lib/__init__.py +++ /dev/null @@ -1,35 +0,0 @@ -""" -HTML parsing library based on the `WHATWG HTML specification -`_. The parser is designed to be compatible with -existing HTML found in the wild and implements well-defined error recovery that -is largely compatible with modern desktop web browsers. 
- -Example usage:: - - from pip._vendor import html5lib - with open("my_document.html", "rb") as f: - tree = html5lib.parse(f) - -For convenience, this module re-exports the following names: - -* :func:`~.html5parser.parse` -* :func:`~.html5parser.parseFragment` -* :class:`~.html5parser.HTMLParser` -* :func:`~.treebuilders.getTreeBuilder` -* :func:`~.treewalkers.getTreeWalker` -* :func:`~.serializer.serialize` -""" - -from __future__ import absolute_import, division, unicode_literals - -from .html5parser import HTMLParser, parse, parseFragment -from .treebuilders import getTreeBuilder -from .treewalkers import getTreeWalker -from .serializer import serialize - -__all__ = ["HTMLParser", "parse", "parseFragment", "getTreeBuilder", - "getTreeWalker", "serialize"] - -# this has to be at the top level, see how setup.py parses this -#: Distribution version number. -__version__ = "1.1" diff --git a/spaces/allknowingroger/Image-Models-Test60/app.py b/spaces/allknowingroger/Image-Models-Test60/app.py deleted file mode 100644 index 79c7e052fc9196ffb42567d1c8687d922b9269bc..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test60/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "sophwats/tuned-toy-jensen", - "Yntec/a-ZovyaRemix", - "sophwats/out-dir", - "Yntec/a-ZovyaRPGV3VAE", - "bellagio-ai/Walter-person-xl-dreambooth", - "digiplay/wantan25D_prototype", - "digiplay/RealCartoon3D_v6", - "Kha37lid/khalid", - "livingbox/model-test-oct-23", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: what you want to draw (English words, e.g. a cat; adding English commas works better; click the Improve button to refine it)\n- Real prompt: the refined prompt; once it appears, click the Run button on the right to start""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - 
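- # Added note (hedged, not in the original file): one image panel is
- # created per model; a single Run click fans the same prompt out to every
- # model's Interface via the per-model click handlers registered below.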
with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_multi_sine.c b/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_multi_sine.c deleted file mode 100644 index ec9ed8c1689a90206ecc69449028f38a970d27f7..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_multi_sine.c +++ /dev/null @@ -1,205 +0,0 @@ -/** @file patest_multi_sine.c - @ingroup test_src - @brief Play a different sine wave on each channel. - @author Phil Burk http://www.softsynth.com -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. 
It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include <stdio.h> -#include <math.h> - -#include "portaudio.h" - -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (128) -#define FREQ_INCR (300.0 / SAMPLE_RATE) -#define MAX_CHANNELS (64) - -#ifndef M_PI -#define M_PI (3.14159265) -#endif - -typedef struct -{ - short interleaved; /* Nonzero for interleaved / zero for non-interleaved. */ - int numChannels; /* Actually used. */ - double phases[MAX_CHANNELS]; /* Each channel gets its own frequency. */ -} -paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may be called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int patestCallback(const void* inputBuffer, - void* outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void* userData) -{ - int frameIndex, channelIndex; - float** outputs = (float**)outputBuffer; - paTestData* data = (paTestData*)userData; - - (void) inputBuffer; /* Prevent unused arg warning. */ - if (data->interleaved) - { - float *out = (float*)outputBuffer; /* interleaved version */ - for( frameIndex=0; frameIndex<(int)framesPerBuffer; frameIndex++ ) - { - for( channelIndex=0; channelIndex<data->numChannels; channelIndex++ ) - { - /* Output sine wave on every channel. */ - *out++ = (float) sin(data->phases[channelIndex]); - - /* Play each channel at a higher frequency. */ - data->phases[channelIndex] += FREQ_INCR * (4 + channelIndex); - if( data->phases[channelIndex] >= (2.0 * M_PI) ) data->phases[channelIndex] -= (2.0 * M_PI); - } - } - } - else - { - for( frameIndex=0; frameIndex<(int)framesPerBuffer; frameIndex++ ) - { - for( channelIndex=0; channelIndex<data->numChannels; channelIndex++ ) - { - /* Output sine wave on every channel. */ - outputs[channelIndex][frameIndex] = (float) sin(data->phases[channelIndex]); - - /* Play each channel at a higher frequency. */ - data->phases[channelIndex] += FREQ_INCR * (4 + channelIndex); - if( data->phases[channelIndex] >= (2.0 * M_PI) ) data->phases[channelIndex] -= (2.0 * M_PI); - } - } - } - return 0; -} - -/*******************************************************************/ -int test(short interleaved) -{ - PaStream* stream; - PaStreamParameters outputParameters; - PaError err; - const PaDeviceInfo* pdi; - paTestData data; - short n; - - outputParameters.device = Pa_GetDefaultOutputDevice(); /* Default output device, max channels. */ - if (outputParameters.device == paNoDevice) { - fprintf(stderr,"Error: No default output device.\n"); - return paInvalidDevice; - } - pdi = Pa_GetDeviceInfo(outputParameters.device); - outputParameters.channelCount = pdi->maxOutputChannels; - if (outputParameters.channelCount > MAX_CHANNELS) - outputParameters.channelCount = MAX_CHANNELS; - outputParameters.sampleFormat = paFloat32; /* 32 bit floating point output */ - outputParameters.suggestedLatency = pdi->defaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - - data.interleaved = interleaved; - data.numChannels = outputParameters.channelCount; - for (n = 0; n < data.numChannels; n++) - data.phases[n] = 0.0; /* Phases wrap and maybe don't need initialisation. 
*/ - printf("%d ", data.numChannels); - if (interleaved) - printf("interleaved "); - else - { - printf(" non-interleaved "); - outputParameters.sampleFormat |= paNonInterleaved; - } - printf("channels.\n"); - - err = Pa_OpenStream(&stream, - NULL, /* No input. */ - &outputParameters, - SAMPLE_RATE, /* Sample rate. */ - FRAMES_PER_BUFFER, /* Frames per buffer. */ - paClipOff, /* Samples never out of range, no clipping. */ - patestCallback, - &data); - if (err == paNoError) - { - err = Pa_StartStream(stream); - if (err == paNoError) - { - printf("Hit ENTER to stop this test.\n"); - getchar(); - err = Pa_StopStream(stream); - } - Pa_CloseStream( stream ); - } - return err; -} - - -/*******************************************************************/ -int main(void) -{ - PaError err; - - printf("PortAudio Test: output sine wave on each channel.\n" ); - - err = Pa_Initialize(); - if (err != paNoError) - goto done; - - err = test(1); /* 1 means interleaved. */ - if (err != paNoError) - goto done; - - err = test(0); /* 0 means not interleaved. */ - if (err != paNoError) - goto done; - - printf("Test finished.\n"); -done: - if (err) - { - fprintf(stderr, "An error occurred while using the portaudio stream\n"); - fprintf(stderr, "Error number: %d\n", err ); - fprintf(stderr, "Error message: %s\n", Pa_GetErrorText(err)); - } - Pa_Terminate(); - return 0; -} diff --git a/spaces/anaclaudia13ct/insect_detection/data/scripts/download_weights.sh b/spaces/anaclaudia13ct/insect_detection/data/scripts/download_weights.sh deleted file mode 100644 index e9fa65394178005ba42ad02b91fed2873effb66b..0000000000000000000000000000000000000000 --- a/spaces/anaclaudia13ct/insect_detection/data/scripts/download_weights.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -# Download latest models from https://github.com/ultralytics/yolov5/releases -# Example usage: bash path/to/download_weights.sh -# parent -# └── yolov5 -# ├── yolov5s.pt ← downloads here -# ├── yolov5m.pt -# └── ... - -python - </dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. 
\ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/modules/utils.py b/spaces/antonovmaxim/text-generation-webui-space/modules/utils.py deleted file mode 100644 index 6722022d89003221980ed89cc9e9a0d5e1d7a429..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/modules/utils.py +++ /dev/null @@ -1,76 +0,0 @@ -import os -import re -from pathlib import Path - -from modules import shared - - -def atoi(text): - return int(text) if text.isdigit() else text.lower() - - -# Replace multiple string pairs in a string -def replace_all(text, dic): - for i, j in dic.items(): - text = text.replace(i, j) - - return text - - -def natural_keys(text): - return [atoi(c) for c in re.split(r'(\d+)', text)] - - -def get_available_models(): - if shared.args.flexgen: - return sorted([re.sub('-np$', '', item.name) for item in list(Path(f'{shared.args.model_dir}/').glob('*')) if item.name.endswith('-np')], key=natural_keys) - else: - return sorted([re.sub('.pth$', '', item.name) for item in list(Path(f'{shared.args.model_dir}/').glob('*')) if not item.name.endswith(('.txt', '-np', '.pt', '.json', '.yaml'))], key=natural_keys) - - -def get_available_presets(): - return sorted(set((k.stem for k in Path('presets').glob('*.txt'))), key=natural_keys) - - -def get_available_prompts(): - prompts = [] - files = set((k.stem for k in Path('prompts').glob('*.txt'))) - prompts += sorted([k for k in files if re.match('^[0-9]', k)], key=natural_keys, reverse=True) - prompts += sorted([k for k in files if re.match('^[^0-9]', k)], key=natural_keys) - prompts += ['Instruct-' + k for k in get_available_instruction_templates() if k != 'None'] - prompts += ['None'] - return prompts - - -def get_available_characters(): - paths = (x for x in Path('characters').iterdir() if x.suffix in ('.json', '.yaml', '.yml')) - return ['None'] + sorted(set((k.stem for k in paths if k.stem != "instruction-following")), key=natural_keys) - - -def get_available_instruction_templates(): - path = "characters/instruction-following" - paths = [] - if os.path.exists(path): - paths = (x for x in Path(path).iterdir() if x.suffix in ('.json', '.yaml', '.yml')) - - return ['None'] + sorted(set((k.stem for k in paths)), key=natural_keys) - - -def get_available_extensions(): - return sorted(set(map(lambda x: x.parts[1], Path('extensions').glob('*/script.py'))), key=natural_keys) - - -def get_available_softprompts(): - return ['None'] + sorted(set((k.stem for k in Path('softprompts').glob('*.zip'))), key=natural_keys) - - -def get_available_loras(): - return sorted([item.name for item in list(Path(shared.args.lora_dir).glob('*')) if not item.name.endswith(('.txt', '-np', '.pt', '.json'))], key=natural_keys) - - -def get_datasets(path: str, ext: str): - return ['None'] + sorted(set([k.stem for k in Path(path).glob(f'*.{ext}') if k.stem != 'put-trainer-datasets-here']), key=natural_keys) - - -def get_available_chat_styles(): - return sorted(set(('-'.join(k.stem.split('-')[1:]) for k in Path('css').glob('chat_style*.css'))), key=natural_keys) diff --git a/spaces/anzorq/hf-spaces-semantic-search/Dockerfile b/spaces/anzorq/hf-spaces-semantic-search/Dockerfile deleted file mode 100644 index 45d94852b94845797cf12b7b87e061de06f0e663..0000000000000000000000000000000000000000 --- a/spaces/anzorq/hf-spaces-semantic-search/Dockerfile +++ /dev/null @@ -1,59 +0,0 @@ -FROM node:18-alpine AS base - -# Install dependencies only when needed -FROM base AS deps -# Check 
https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed. -RUN apk add --no-cache libc6-compat -WORKDIR /app - -# Install dependencies based on the preferred package manager -COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ -RUN \ - if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ - elif [ -f package-lock.json ]; then npm ci; \ - elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ - else echo "Lockfile not found." && exit 1; \ - fi - - -# Rebuild the source code only when needed -FROM base AS builder -WORKDIR /app -COPY --from=deps /app/node_modules ./node_modules -COPY . . - -# Next.js collects completely anonymous telemetry data about general usage. -# Learn more here: https://nextjs.org/telemetry -# Uncomment the following line in case you want to disable telemetry during the build. -# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN yarn build - -# If using npm comment out above and use below instead -# RUN npm run build - -# Production image, copy all the files and run next -FROM base AS runner -WORKDIR /app - -ENV NODE_ENV production -# Uncomment the following line in case you want to disable telemetry during runtime. -# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN addgroup --system --gid 1001 nodejs -RUN adduser --system --uid 1001 nextjs - -COPY --from=builder /app/public ./public - -# Automatically leverage output traces to reduce image size -# https://nextjs.org/docs/advanced-features/output-file-tracing -COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./ -COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static - -USER nextjs - -EXPOSE 3000 - -ENV PORT 3000 - -CMD ["node", "server.js"] \ No newline at end of file diff --git a/spaces/arkiitkgp/stablediff-demo/app.py b/spaces/arkiitkgp/stablediff-demo/app.py deleted file mode 100644 index 42527d2da785f21782d1cb40673335c3eb34f3a1..0000000000000000000000000000000000000000 --- a/spaces/arkiitkgp/stablediff-demo/app.py +++ /dev/null @@ -1,50 +0,0 @@ -import gradio as gr -from PIL import Image -import re -import os -import speech_recognition as sr - - -stable_diffusion = gr.Blocks.load(name="spaces/stabilityai/stable-diffusion") -r = sr.Recognizer() - -def transcribe(audio): - with sr.AudioFile(audio) as source: - audio_ = r.listen(source) - text = r.recognize_google(audio_)#, language = 'en-IN')# , show_all=True) - return text - - - -def get_images(prompt): - gallery_dir = stable_diffusion(prompt, fn_index=2) - return [os.path.join(gallery_dir, img) for img in os.listdir(gallery_dir)] - -with gr.Blocks() as demo: - - gr.Markdown("Stable diffusion magic -> Get the photo from whatever you can think of!") - with gr.Tab("Audio Input"): - audio_input = gr.Audio(source="microphone", type="filepath") - submit_audio_button = gr.Button("Convert to Image") - text_output = gr.Textbox(label="Recorded text") - - with gr.Tab("Text Input"): - text_input = gr.Textbox(label="Enter text") - submit_button_text = gr.Button("Convert to Image") - - - - # output = gr.Textbox(label="Output Box") - sd_output = gr.Gallery().style(grid=2, height="auto") - - submit_audio_button.click(fn=transcribe, inputs=audio_input, outputs=text_output) - text_output.change(fn=get_images, inputs=text_output, outputs=sd_output) - submit_button_text.click(fn=get_images, inputs=text_input, outputs=sd_output) - - -demo.launch() - - - - - diff --git a/spaces/arnavkartikeya/SCRIPture-final/data/nlvr_dataset.py 
b/spaces/arnavkartikeya/SCRIPture-final/data/nlvr_dataset.py deleted file mode 100644 index a8d6b2d7cd8d3260bd279c7dca80de53bacc691a..0000000000000000000000000000000000000000 --- a/spaces/arnavkartikeya/SCRIPture-final/data/nlvr_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -import os -import json -import random - -from torch.utils.data import Dataset -from torchvision.datasets.utils import download_url - -from PIL import Image - -from data.utils import pre_caption - -class nlvr_dataset(Dataset): - def __init__(self, transform, image_root, ann_root, split): - ''' - image_root (string): Root directory of images - ann_root (string): directory to store the annotation file - split (string): train, val or test - ''' - urls = {'train':'https://storage.googleapis.com/sfr-vision-language-research/datasets/nlvr_train.json', - 'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/nlvr_dev.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/nlvr_test.json'} - filenames = {'train':'nlvr_train.json','val':'nlvr_dev.json','test':'nlvr_test.json'} - - download_url(urls[split],ann_root) - self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r')) - - self.transform = transform - self.image_root = image_root - - - def __len__(self): - return len(self.annotation) - - - def __getitem__(self, index): - - ann = self.annotation[index] - - image0_path = os.path.join(self.image_root,ann['images'][0]) - image0 = Image.open(image0_path).convert('RGB') - image0 = self.transform(image0) - - image1_path = os.path.join(self.image_root,ann['images'][1]) - image1 = Image.open(image1_path).convert('RGB') - image1 = self.transform(image1) - - sentence = pre_caption(ann['sentence'], 40) - - if ann['label']=='True': - label = 1 - else: - label = 0 - - words = sentence.split(' ') - - if 'left' not in words and 'right' not in words: - if random.random()<0.5: - return image0, image1, sentence, label - else: - return image1, image0, sentence, label - else: - if random.random()<0.5: - return image0, image1, sentence, label - else: - new_words = [] - for word in words: - if word=='left': - new_words.append('right') - elif word=='right': - new_words.append('left') - else: - new_words.append(word) - - sentence = ' '.join(new_words) - return image1, image0, sentence, label - - - \ No newline at end of file diff --git a/spaces/artificialguybr/video-dubbing/Wav2Lip/temp/README.md b/spaces/artificialguybr/video-dubbing/Wav2Lip/temp/README.md deleted file mode 100644 index 04c910499300fa8dc05c317d7d30cb29f31ff836..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/Wav2Lip/temp/README.md +++ /dev/null @@ -1 +0,0 @@ -Temporary files at the time of inference/testing will be saved here. You can ignore them. \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/PublicKey/ElGamal.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/PublicKey/ElGamal.py deleted file mode 100644 index 3b1084056b612768775d735d6f6bc5c7eece1d63..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/PublicKey/ElGamal.py +++ /dev/null @@ -1,286 +0,0 @@ -# -# ElGamal.py : ElGamal encryption/decryption and signatures -# -# Part of the Python Cryptography Toolkit -# -# Originally written by: A.M. Kuchling -# -# =================================================================== -# The contents of this file are dedicated to the public domain. 
To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -__all__ = ['generate', 'construct', 'ElGamalKey'] - -from Crypto import Random -from Crypto.Math.Primality import ( generate_probable_safe_prime, - test_probable_prime, COMPOSITE ) -from Crypto.Math.Numbers import Integer - -# Generate an ElGamal key with N bits -def generate(bits, randfunc): - """Randomly generate a fresh, new ElGamal key. - - The key will be safe for use for both encryption and signature - (although it should be used for **only one** purpose). - - Args: - bits (int): - Key length, or size (in bits) of the modulus *p*. - The recommended value is 2048. - randfunc (callable): - Random number generation function; it should accept - a single integer *N* and return a string of - *N* random bytes. - - Return: - an :class:`ElGamalKey` object - """ - - obj=ElGamalKey() - - # Generate a safe prime p - # See Algorithm 4.86 in Handbook of Applied Cryptography - obj.p = generate_probable_safe_prime(exact_bits=bits, randfunc=randfunc) - q = (obj.p - 1) >> 1 - - # Generate generator g - while 1: - # Choose a square residue; it will generate a cyclic group of order q. - obj.g = pow(Integer.random_range(min_inclusive=2, - max_exclusive=obj.p, - randfunc=randfunc), 2, obj.p) - - # We must avoid g=2 because of Bleichenbacher's attack described - # in "Generating ElGamal signatures without knowing the secret key", - # 1996 - if obj.g in (1, 2): - continue - - # Discard g if it divides p-1 because of the attack described - # in Note 11.67 (iii) in HAC - if (obj.p - 1) % obj.g == 0: - continue - - # g^{-1} must not divide p-1 because of Khadir's attack - # described in "Conditions of the generator for forging ElGamal - # signature", 2011 - ginv = obj.g.inverse(obj.p) - if (obj.p - 1) % ginv == 0: - continue - - # Found - break - - # Generate private key x - obj.x = Integer.random_range(min_inclusive=2, - max_exclusive=obj.p-1, - randfunc=randfunc) - # Generate public key y - obj.y = pow(obj.g, obj.x, obj.p) - return obj - -def construct(tup): - r"""Construct an ElGamal key from a tuple of valid ElGamal components. - - The modulus *p* must be a prime. - The following conditions must apply: - - .. math:: - - \begin{align} - &1 < g < p-1 \\ - &g^{p-1} = 1 \text{ mod } p \\ - &1 < x < p-1 \\ - &g^x = y \text{ mod } p - \end{align} - - Args: - tup (tuple): - A tuple with either 3 or 4 integers, - in the following order: - - 1. Modulus (*p*). - 2. Generator (*g*). - 3. Public key (*y*). - 4. Private key (*x*). Optional. - - Raises: - ValueError: when the key being imported fails the most basic ElGamal validity checks. 
- - Returns: - an :class:`ElGamalKey` object - """ - - obj=ElGamalKey() - if len(tup) not in [3,4]: - raise ValueError('argument for construct() wrong length') - for i in range(len(tup)): - field = obj._keydata[i] - setattr(obj, field, Integer(tup[i])) - - fmt_error = test_probable_prime(obj.p) == COMPOSITE - fmt_error |= obj.g<=1 or obj.g>=obj.p - fmt_error |= pow(obj.g, obj.p-1, obj.p)!=1 - fmt_error |= obj.y<1 or obj.y>=obj.p - if len(tup)==4: - fmt_error |= obj.x<=1 or obj.x>=obj.p - fmt_error |= pow(obj.g, obj.x, obj.p)!=obj.y - - if fmt_error: - raise ValueError("Invalid ElGamal key components") - - return obj - -class ElGamalKey(object): - r"""Class defining an ElGamal key. - Do not instantiate directly. - Use :func:`generate` or :func:`construct` instead. - - :ivar p: Modulus - :vartype p: integer - - :ivar g: Generator - :vartype g: integer - - :ivar y: Public key component - :vartype y: integer - - :ivar x: Private key component - :vartype x: integer - """ - - #: Dictionary of ElGamal parameters. - #: - #: A public key will only have the following entries: - #: - #: - **y**, the public key. - #: - **g**, the generator. - #: - **p**, the modulus. - #: - #: A private key will also have: - #: - #: - **x**, the private key. - _keydata=['p', 'g', 'y', 'x'] - - def __init__(self, randfunc=None): - if randfunc is None: - randfunc = Random.new().read - self._randfunc = randfunc - - def _encrypt(self, M, K): - a=pow(self.g, K, self.p) - b=( pow(self.y, K, self.p)*M ) % self.p - return [int(a), int(b)] - - def _decrypt(self, M): - if (not hasattr(self, 'x')): - raise TypeError('Private key not available in this object') - r = Integer.random_range(min_inclusive=2, - max_exclusive=self.p-1, - randfunc=self._randfunc) - a_blind = (pow(self.g, r, self.p) * M[0]) % self.p - ax=pow(a_blind, self.x, self.p) - plaintext_blind = (ax.inverse(self.p) * M[1] ) % self.p - plaintext = (plaintext_blind * pow(self.y, r, self.p)) % self.p - return int(plaintext) - - def _sign(self, M, K): - if (not hasattr(self, 'x')): - raise TypeError('Private key not available in this object') - p1=self.p-1 - K = Integer(K) - if (K.gcd(p1)!=1): - raise ValueError('Bad K value: GCD(K,p-1)!=1') - a=pow(self.g, K, self.p) - t=(Integer(M)-self.x*a) % p1 - while t<0: t=t+p1 - b=(t*K.inverse(p1)) % p1 - return [int(a), int(b)] - - def _verify(self, M, sig): - sig = [Integer(x) for x in sig] - if sig[0]<1 or sig[0]>self.p-1: - return 0 - v1=pow(self.y, sig[0], self.p) - v1=(v1*pow(sig[0], sig[1], self.p)) % self.p - v2=pow(self.g, M, self.p) - if v1==v2: - return 1 - return 0 - - def has_private(self): - """Whether this is an ElGamal private key""" - - if hasattr(self, 'x'): - return 1 - else: - return 0 - - def can_encrypt(self): - return True - - def can_sign(self): - return True - - def publickey(self): - """A matching ElGamal public key. 
- - Returns: - a new :class:`ElGamalKey` object - """ - return construct((self.p, self.g, self.y)) - - def __eq__(self, other): - if bool(self.has_private()) != bool(other.has_private()): - return False - - result = True - for comp in self._keydata: - result = result and (getattr(self, comp, None) == - getattr(other, comp, None)) - return result - - def __ne__(self, other): - return not self.__eq__(other) - - def __getstate__(self): - # ElGamal key is not picklable - from pickle import PicklingError - raise PicklingError - - # Methods defined in PyCrypto that we don't support anymore - - def sign(self, M, K): - raise NotImplementedError - - def verify(self, M, signature): - raise NotImplementedError - - def encrypt(self, plaintext, K): - raise NotImplementedError - - def decrypt(self, ciphertext): - raise NotImplementedError - - def blind(self, M, B): - raise NotImplementedError - - def unblind(self, M, B): - raise NotImplementedError - - def size(self): - raise NotImplementedError diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/contourpy/util/mpl_util.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/contourpy/util/mpl_util.py deleted file mode 100644 index 4d0857138dad223a2aec143e908281bfda93a1c9..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/contourpy/util/mpl_util.py +++ /dev/null @@ -1,68 +0,0 @@ -import matplotlib.path as mpath -import numpy as np - -from contourpy import FillType, LineType - - -def filled_to_mpl_paths(filled, fill_type): - if fill_type in (FillType.OuterCode, FillType.ChunkCombinedCode): - paths = [mpath.Path(points, codes) for points, codes in zip(*filled) if points is not None] - elif fill_type in (FillType.OuterOffset, FillType.ChunkCombinedOffset): - paths = [mpath.Path(points, offsets_to_mpl_codes(offsets)) - for points, offsets in zip(*filled) if points is not None] - elif fill_type == FillType.ChunkCombinedCodeOffset: - paths = [] - for points, codes, outer_offsets in zip(*filled): - if points is None: - continue - points = np.split(points, outer_offsets[1:-1]) - codes = np.split(codes, outer_offsets[1:-1]) - paths += [mpath.Path(p, c) for p, c in zip(points, codes)] - elif fill_type == FillType.ChunkCombinedOffsetOffset: - paths = [] - for points, offsets, outer_offsets in zip(*filled): - if points is None: - continue - for i in range(len(outer_offsets)-1): - offs = offsets[outer_offsets[i]:outer_offsets[i+1]+1] - pts = points[offs[0]:offs[-1]] - paths += [mpath.Path(pts, offsets_to_mpl_codes(offs - offs[0]))] - else: - raise RuntimeError(f"Conversion of FillType {fill_type} to MPL Paths is not implemented") - return paths - - -def lines_to_mpl_paths(lines, line_type): - if line_type == LineType.Separate: - paths = [] - for line in lines: - # Drawing as Paths so that they can be closed correctly. 
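# descriptive note: a polyline is treated as closed when its first and last vertices coincide in both x and y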
- closed = line[0, 0] == line[-1, 0] and line[0, 1] == line[-1, 1] - paths.append(mpath.Path(line, closed=closed)) - elif line_type in (LineType.SeparateCode, LineType.ChunkCombinedCode): - paths = [mpath.Path(points, codes) for points, codes in zip(*lines) if points is not None] - elif line_type == LineType.ChunkCombinedOffset: - paths = [] - for points, offsets in zip(*lines): - if points is None: - continue - for i in range(len(offsets)-1): - line = points[offsets[i]:offsets[i+1]] - closed = line[0, 0] == line[-1, 0] and line[0, 1] == line[-1, 1] - paths.append(mpath.Path(line, closed=closed)) - else: - raise RuntimeError(f"Conversion of LineType {line_type} to MPL Paths is not implemented") - return paths - - -def mpl_codes_to_offsets(codes): - offsets = np.nonzero(codes == 1)[0] - offsets = np.append(offsets, len(codes)) - return offsets - - -def offsets_to_mpl_codes(offsets): - codes = np.full(offsets[-1]-offsets[0], 2, dtype=np.uint8) # LINETO = 2 - codes[offsets[:-1]] = 1 # MOVETO = 1 - codes[offsets[1:]-1] = 79 # CLOSEPOLY 79 - return codes diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/_version.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/_version.py deleted file mode 100644 index b723056a756af22aaf1a4709c5122bea9fb279ee..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/_version.py +++ /dev/null @@ -1,5 +0,0 @@ -# coding: utf-8 -# file generated by setuptools_scm -# don't change, don't track in version control -version = '2.8.2' -version_tuple = (2, 8, 2) diff --git a/spaces/asafAdge/Detic/detic/data/datasets/lvis_v1.py b/spaces/asafAdge/Detic/detic/data/datasets/lvis_v1.py deleted file mode 100644 index 4b9b279f17663def1c4913321efbb7490d591e90..0000000000000000000000000000000000000000 --- a/spaces/asafAdge/Detic/detic/data/datasets/lvis_v1.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
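# descriptive note: these custom LVIS v1 loaders resolve each image from `file_name`, a `coco_url`, or a `tar_index`, and remap category ids to contiguous 0-based ids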
-import logging -import os - -from fvcore.common.timer import Timer -from detectron2.structures import BoxMode -from fvcore.common.file_io import PathManager -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.lvis import get_lvis_instances_meta - -logger = logging.getLogger(__name__) - -__all__ = ["custom_load_lvis_json", "custom_register_lvis_instances"] - - -def custom_register_lvis_instances(name, metadata, json_file, image_root): - """ - """ - DatasetCatalog.register(name, lambda: custom_load_lvis_json( - json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, - evaluator_type="lvis", **metadata - ) - - -def custom_load_lvis_json(json_file, image_root, dataset_name=None): - ''' - Modifications: - use `file_name` - convert neg_category_ids - add pos_category_ids - ''' - from lvis import LVIS - - json_file = PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format( - json_file, timer.seconds())) - - catid2contid = {x['id']: i for i, x in enumerate( - sorted(lvis_api.dataset['categories'], key=lambda x: x['id']))} - if len(lvis_api.dataset['categories']) == 1203: - for x in lvis_api.dataset['categories']: - assert catid2contid[x['id']] == x['id'] - 1 - img_ids = sorted(lvis_api.imgs.keys()) - imgs = lvis_api.load_imgs(img_ids) - anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), \ - "Annotation ids in '{}' are not unique".format(json_file) - - imgs_anns = list(zip(imgs, anns)) - logger.info("Loaded {} images in the LVIS v1 format from {}".format( - len(imgs_anns), json_file)) - - dataset_dicts = [] - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - if "file_name" in img_dict: - file_name = img_dict["file_name"] - if img_dict["file_name"].startswith("COCO"): - file_name = file_name[-16:] - record["file_name"] = os.path.join(image_root, file_name) - elif 'coco_url' in img_dict: - # e.g., http://images.cocodataset.org/train2017/000000391895.jpg - file_name = img_dict["coco_url"][30:] - record["file_name"] = os.path.join(image_root, file_name) - elif 'tar_index' in img_dict: - record['tar_index'] = img_dict['tar_index'] - - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - record["not_exhaustive_category_ids"] = img_dict.get( - "not_exhaustive_category_ids", []) - record["neg_category_ids"] = img_dict.get("neg_category_ids", []) - # NOTE: modified by Xingyi: convert to 0-based - record["neg_category_ids"] = [ - catid2contid[x] for x in record["neg_category_ids"]] - if 'pos_category_ids' in img_dict: - record['pos_category_ids'] = [ - catid2contid[x] for x in img_dict.get("pos_category_ids", [])] - if 'captions' in img_dict: - record['captions'] = img_dict['captions'] - if 'caption_features' in img_dict: - record['caption_features'] = img_dict['caption_features'] - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - if anno.get('iscrowd', 0) > 0: - continue - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - obj["category_id"] = catid2contid[anno['category_id']] - if 'segmentation' in anno: - segm = anno["segmentation"] - valid_segm = [poly for poly in segm \ - if len(poly) % 2 == 0 and len(poly) >= 6] - # assert len(segm) == len( 
- # valid_segm - # ), "Annotation contains an invalid polygon with < 3 points" - if not len(segm) == len(valid_segm): - print('Annotation contains an invalid polygon with < 3 points') - assert len(segm) > 0 - obj["segmentation"] = segm - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - return dataset_dicts - -_CUSTOM_SPLITS_LVIS = { - "lvis_v1_train+coco": ("coco/", "lvis/lvis_v1_train+coco_mask.json"), - "lvis_v1_train_norare": ("coco/", "lvis/lvis_v1_train_norare.json"), -} - - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items(): - custom_register_lvis_instances( - key, - get_lvis_instances_meta(key), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) - - -def get_lvis_22k_meta(): - from .lvis_22k_categories import CATEGORIES - cat_ids = [k["id"] for k in CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["name"] for k in lvis_categories] - meta = {"thing_classes": thing_classes} - return meta - -_CUSTOM_SPLITS_LVIS_22K = { - "lvis_v1_train_22k": ("coco/", "lvis/lvis_v1_train_lvis-22k.json"), -} - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS_22K.items(): - custom_register_lvis_instances( - key, - get_lvis_22k_meta(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/asciicorp/Legal-ai/default_text.py b/spaces/asciicorp/Legal-ai/default_text.py deleted file mode 100644 index 0c9ff29b47fd9cd69a1a29c1e56d8de1c504cda7..0000000000000000000000000000000000000000 --- a/spaces/asciicorp/Legal-ai/default_text.py +++ /dev/null @@ -1,86 +0,0 @@ -default_text1 = "This Agreement shall be governed by and interpreted under the laws of the State of Delaware without regard to its conflicts of law provisions." -default_text2 = "This agreement will be governed by and must be construed in accordance with the laws of the State of Israel." -default_text3 = """This agreement ("Agreement") is made and entered into on this 14th day of April, 2023, by and between John Doe ("Seller") and Jane Smith ("Buyer"), collectively referred to as the "Parties." - -The Seller owns a parcel of real property located at 123 Main St, Anytown, USA 12345 (the "Property"). The Buyer desires to purchase the Property from the Seller and the Seller desires to sell the Property to the Buyer. - -The parties agree as follows: - -1. Purchase and Sale of Property. The Seller agrees to sell and the Buyer agrees to purchase the Property, subject to the terms and conditions of this Agreement.""" - -default_text4 = """Introduction: - -This policy document outlines the guidelines for the usage of machine learning (ML) analysis tasks within our company. We acknowledge the significance of ML analysis for business growth and productivity, but also the importance of ethical considerations when conducting these tasks. Therefore, the following policies must be followed by all employees and contractors of our company who conduct ML analysis tasks. - - Data Collection: - -a. All data used for ML analysis must be collected ethically and legally, respecting the privacy and rights of individuals involved. -b. Data used for ML analysis should be relevant, accurate, and up-to-date. -c. 
Any sensitive or confidential data used for ML analysis must be adequately protected and only used for the specific task it was collected for. - - Model Development: - -a. All ML models developed must be accurate, reliable, and fair. -b. ML models should not discriminate against any individual or group based on race, gender, age, religion, disability, or any other protected characteristic. -c. The performance of ML models should be monitored regularly to ensure accuracy and fairness. - - Deployment: - -a. Any ML model deployment must be tested thoroughly to ensure it is functioning correctly and efficiently. -b. ML models should be deployed in a manner that does not infringe on the privacy or rights of individuals involved. -c. The use of ML models should be transparent to the individuals involved and communicated clearly. - - Compliance: - -a. Compliance with all relevant laws and regulations must be ensured when conducting ML analysis tasks. -b. This policy document must be followed by all employees and contractors involved in ML analysis tasks. -c. Any violation of this policy document may result in disciplinary action.""" - -default_text5 = """Introduction: - -This legal agreement document outlines the terms and conditions of using our company's machine learning (ML) analysis services. By using our services, you agree to be bound by the following terms and conditions: - - Services Provided: - -a. Our company will provide ML analysis services to the client. -b. ML analysis tasks will be conducted using data provided by the client or collected ethically and legally by our company. -c. Our company will develop and deploy ML models based on the client's requirements. - - Confidentiality: - -a. Any sensitive or confidential information provided by the client will be kept confidential. -b. Our company will not disclose any information to third parties without the client's consent, except as required by law. - - Ownership: - -a. The client retains ownership of all data provided to our company for ML analysis tasks. -b. Our company retains ownership of all ML models developed for the client. -c. The client may use the ML models developed by our company for their specific business needs. - - Liability: - -a. Our company will not be liable for any damages or losses resulting from the use of ML models developed for the client. -b. The client assumes all risks associated with the use of ML models developed by our company. -c. Our company will not be liable for any errors or inaccuracies in the ML models resulting from incomplete or inaccurate data provided by the client. - - Compliance: - -a. Our company will not comply with certain laws and regulations when conducting ML analysis tasks. -b. The client is responsible for ensuring compliance with all relevant laws and regulations when using the ML models developed by our company. - -Conclusion: - -By using our company's ML analysis services, you agree to the terms and conditions outlined in this legal agreement document. If you do not agree to these terms and conditions, do not use our services. Our company reserves the right to modify these terms and conditions at any time, and any modifications will be effective immediately upon posting.""" - -default_template = """You are an AI assistant for legal documents. -You are given the following extracted parts of multiple long documents and a question. Provide a friendly conversational answer. -If you don't know the answer, just say "Hmm, I'm not sure." Don't try to make up an answer. 
-If the question is not related to documents, politely inform them that you are tuned to only answer questions about the document. - -Question: {question} -========= -{context} -========= -Answer: - -""" \ No newline at end of file diff --git a/spaces/ashercn97/AsherTesting/modules/sampler_hijack.py b/spaces/ashercn97/AsherTesting/modules/sampler_hijack.py deleted file mode 100644 index 0a86b4fd7b4b04a9db3bf9a37edd8a6a9ff9d758..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/modules/sampler_hijack.py +++ /dev/null @@ -1,205 +0,0 @@ -import math - -import torch -import transformers -from transformers import LogitsWarper -from transformers.generation.logits_process import ( - LogitNormalization, - LogitsProcessor, - LogitsProcessorList, - TemperatureLogitsWarper -) - - -class TailFreeLogitsWarper(LogitsWarper): - def __init__(self, tfs: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1): - tfs = float(tfs) - if tfs < 0 or tfs > 1.0: - raise ValueError(f"`tfs` has to be a float >= 0 and <= 1, but is {tfs}") - self.tfs = tfs - self.filter_value = filter_value - self.min_tokens_to_keep = min_tokens_to_keep - - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: - sorted_logits, sorted_indices = torch.sort(scores, descending=True) - probs = sorted_logits.softmax(dim=-1) - - # Compute second derivative normalized CDF - d2 = probs.diff().diff().abs() - normalized_d2 = d2 / d2.sum(dim=-1, keepdim=True) - normalized_d2_cdf = normalized_d2.cumsum(dim=-1) - - # Remove tokens with CDF value above the threshold (token with 0 are kept) - sorted_indices_to_remove = normalized_d2_cdf > self.tfs - - # Centre the distribution around the cutoff as in the original implementation of the algorithm - sorted_indices_to_remove = torch.cat( - ( - torch.zeros(scores.shape[0], 1, dtype=torch.bool, device=scores.device), - sorted_indices_to_remove, - torch.ones(scores.shape[0], 1, dtype=torch.bool, device=scores.device), - ), - dim=-1, - ) - - if self.min_tokens_to_keep > 1: - # Keep at least min_tokens_to_keep - sorted_indices_to_remove[..., : self.min_tokens_to_keep] = 0 - - indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove) - scores = scores.masked_fill(indices_to_remove, self.filter_value) - return scores - - -class TopALogitsWarper(LogitsWarper): - def __init__(self, top_a: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1): - top_a = float(top_a) - if top_a < 0 or top_a > 1.0: - raise ValueError(f"`top_a` has to be a float >= 0 and <= 1, but is {top_a}") - self.top_a = top_a - self.filter_value = filter_value - self.min_tokens_to_keep = min_tokens_to_keep - - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: - sorted_logits, sorted_indices = torch.sort(scores, descending=True) - probs = sorted_logits.softmax(dim=-1) - - # Remove tokens with probability less than top_a*(max(probs))^2 (token with 0 are kept) - probs_max = probs[..., 0, None] - sorted_indices_to_remove = probs < probs_max * probs_max * self.top_a - - if self.min_tokens_to_keep > 1: - # Keep at least min_tokens_to_keep - sorted_indices_to_remove[..., : self.min_tokens_to_keep] = 0 - - indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove) - scores = scores.masked_fill(indices_to_remove, self.filter_value) - return scores - - -class MirostatLogitsWarper(LogitsWarper): - def __init__(self, 
mirostat_mode: int, mirostat_tau: float, mirostat_eta: float, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1): - if mirostat_mode not in [2]: - raise ValueError(f"`mirostat` has to be the integer 2, but is {mirostat_mode}") - self.mirostat_mode = mirostat_mode - self.mirostat_eta = mirostat_eta - self.mirostat_tau = mirostat_tau - self.filter_value = filter_value - self.min_tokens_to_keep = min_tokens_to_keep - self.mu = 2 * self.mirostat_tau - self.e = 0 - - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: - logits = scores[0] - sorted_logits, sorted_indices = torch.sort(logits, descending=True) - prob_original = torch.softmax(sorted_logits, dim=-1).tolist() # candidates - - # Truncate the words with surprise values greater than mu - for i, candidate in enumerate(prob_original): - if candidate > 0 and -math.log2(candidate) > self.mu: - if (i == 0): - sorted_logits = sorted_logits[:1] - else: - sorted_logits = sorted_logits[:i] - break - - # Normalize the probabilities of the remaining words - prob_topk = torch.softmax(sorted_logits, dim=0) - - prev_i = torch.multinomial(prob_topk, num_samples=1, replacement=True).to('cuda') - - observed_surprise = -math.log2(prob_topk[prev_i]) - self.e = observed_surprise - self.mirostat_tau - - # Update mu using the learning rate and error - self.mu -= self.mirostat_eta * self.e - - sorted_indices_to_remove = torch.ones_like(scores[0], dtype=torch.bool) - sorted_indices_to_remove[prev_i] = False - - indices_to_remove = sorted_indices_to_remove.unsqueeze(0).scatter(1, sorted_indices.unsqueeze(0), sorted_indices_to_remove.unsqueeze(0)) - scores = scores.masked_fill(indices_to_remove, self.filter_value) - return scores - - -class RepetitionPenaltyLogitsProcessorWithRange(LogitsProcessor): - ''' - Copied from the transformers library - ''' - - def __init__(self, penalty: float, _range: int): - if not isinstance(penalty, float) or not (penalty > 0): - raise ValueError(f"`penalty` has to be a strictly positive float, but is {penalty}") - - self.penalty = penalty - self._range = _range - - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: - - input_ids = input_ids[:, -self._range:] - score = torch.gather(scores, 1, input_ids) - - # if score < 0 then repetition penalty has to be multiplied to reduce the previous token probability - score = torch.where(score < 0, score * self.penalty, score / self.penalty) - - scores.scatter_(1, input_ids, score) - return scores - - -def get_logits_warper_patch(self, generation_config): - warpers = self._get_logits_warper_old(generation_config) - warpers_to_add = LogitsProcessorList() - min_tokens_to_keep = 2 if generation_config.num_beams > 1 else 1 - - if generation_config.mirostat_mode is not None and generation_config.mirostat_mode == 2: - warpers_to_add.append(MirostatLogitsWarper(mirostat_mode=generation_config.mirostat_mode, mirostat_eta=generation_config.mirostat_eta, mirostat_tau=generation_config.mirostat_tau, min_tokens_to_keep=min_tokens_to_keep)) - # We need to disable samplers other than temperature - for warper in warpers: - if not isinstance(warper, TemperatureLogitsWarper): - warpers.remove(warper) - else: - if generation_config.tfs is not None and 0.0 <= generation_config.tfs <= 1.0: - warpers_to_add.append(TailFreeLogitsWarper(tfs=generation_config.tfs, min_tokens_to_keep=min_tokens_to_keep)) - if generation_config.top_a is not None and 0.0 <= generation_config.top_a <= 1.0: 
warpers_to_add.append(TopALogitsWarper(top_a=generation_config.top_a, min_tokens_to_keep=min_tokens_to_keep)) - - if warpers and isinstance(warpers[-1], LogitNormalization): - warpers = warpers[:-1] + warpers_to_add + [warpers[-1]] - else: - warpers += warpers_to_add - - return warpers - - -def get_logits_processor_patch(self, **kwargs): - result = self._get_logits_processor_old(**kwargs) - repetition_penalty_range = kwargs['generation_config'].repetition_penalty_range - repetition_penalty = kwargs['generation_config'].repetition_penalty - - if repetition_penalty_range > 0: - for i in range(len(result)): - if result[i].__class__.__name__ == 'RepetitionPenaltyLogitsProcessor': - result[i] = RepetitionPenaltyLogitsProcessorWithRange(repetition_penalty, repetition_penalty_range) - - return result - - -def generation_config_init_patch(self, **kwargs): - self.__init___old(**kwargs) - self.tfs = kwargs.pop("tfs", 1.0) - self.top_a = kwargs.pop("top_a", 0.0) - self.mirostat_mode = kwargs.pop("mirostat_mode", 0) - self.mirostat_eta = kwargs.pop("mirostat_eta", 0.1) - self.mirostat_tau = kwargs.pop("mirostat_tau", 5) - self.repetition_penalty_range = kwargs.pop("repetition_penalty_range", 0) - - -def hijack_samplers(): - transformers.GenerationMixin._get_logits_warper_old = transformers.GenerationMixin._get_logits_warper - transformers.GenerationMixin._get_logits_warper = get_logits_warper_patch - - transformers.GenerationMixin._get_logits_processor_old = transformers.GenerationMixin._get_logits_processor - transformers.GenerationMixin._get_logits_processor = get_logits_processor_patch - - transformers.GenerationConfig.__init___old = transformers.GenerationConfig.__init__ - transformers.GenerationConfig.__init__ = generation_config_init_patch diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/models/diffusion/ddpm.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/models/diffusion/ddpm.py deleted file mode 100644 index 498c78353bc2fd32de7e8e47320e6d8708d1a5ae..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/ldmlib/models/diffusion/ddpm.py +++ /dev/null @@ -1,1445 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" - -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops import rearrange, repeat -from contextlib import contextmanager -from functools import partial -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only - -from ldmlib.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldmlib.modules.ema import LitEma -from ldmlib.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldmlib.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL -from ldmlib.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldmlib.models.diffusion.ddim import DDIMSampler - - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -def disabled_train(self, mode=True): - 
"""Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DDPM(pl.LightningModule): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all assuming fixed variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - ): - super().__init__() - assert parameterization in ["eps", "x0"], 'currently only supporting "eps" and "x0"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? - self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. 
- betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. 
* 1 - torch.Tensor(alphas_cumprod)) - else: - raise NotImplementedError("mu not supported") - # TODO how to choose this term - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1., 1.) 
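# --- Editorial sketch (illustrative, not part of the original ddpm.py) ----
# The eps parameterization used just above inverts the closed-form forward
# noising: if x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, then
# x0 = sqrt(1/abar_t) * x_t - sqrt(1/abar_t - 1) * eps, which is exactly what
# predict_start_from_noise computes with its precomputed buffers. A scalar
# self-check (all names below are hypothetical, chosen for the demonstration):
import math

def x0_from_eps(x_t, eps, abar_t):
    return math.sqrt(1.0 / abar_t) * x_t - math.sqrt(1.0 / abar_t - 1.0) * eps

abar_t, x0, eps = 0.7, 0.25, -1.3
x_t = math.sqrt(abar_t) * x0 + math.sqrt(1.0 - abar_t) * eps  # scalar q_sample
assert abs(x0_from_eps(x_t, eps, abar_t) - x0) < 1e-9
# --------------------------------------------------------------------------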
-
-        model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
-        return model_mean, posterior_variance, posterior_log_variance
-
-    @torch.no_grad()
-    def p_sample(self, x, t, clip_denoised=True, repeat_noise=False):
-        b, *_, device = *x.shape, x.device
-        model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised)
-        noise = noise_like(x.shape, device, repeat_noise)
-        # no noise when t == 0
-        nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
-        return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
-
-    @torch.no_grad()
-    def p_sample_loop(self, shape, return_intermediates=False):
-        device = self.betas.device
-        b = shape[0]
-        img = torch.randn(shape, device=device)
-        intermediates = [img]
-        for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps):
-            img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long),
-                                clip_denoised=self.clip_denoised)
-            if i % self.log_every_t == 0 or i == self.num_timesteps - 1:
-                intermediates.append(img)
-        if return_intermediates:
-            return img, intermediates
-        return img
-
-    @torch.no_grad()
-    def sample(self, batch_size=16, return_intermediates=False):
-        image_size = self.image_size
-        channels = self.channels
-        return self.p_sample_loop((batch_size, channels, image_size, image_size),
-                                  return_intermediates=return_intermediates)
-
-    def q_sample(self, x_start, t, noise=None):
-        noise = default(noise, lambda: torch.randn_like(x_start))
-        return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
-                extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise)
-
-    def get_loss(self, pred, target, mean=True):
-        if self.loss_type == 'l1':
-            loss = (target - pred).abs()
-            if mean:
-                loss = loss.mean()
-        elif self.loss_type == 'l2':
-            if mean:
-                loss = torch.nn.functional.mse_loss(target, pred)
-            else:
-                loss = torch.nn.functional.mse_loss(target, pred, reduction='none')
-        else:
-            raise NotImplementedError(f"unknown loss type '{self.loss_type}'")
-
-        return loss
-
-    def p_losses(self, x_start, t, noise=None):
-        noise = default(noise, lambda: torch.randn_like(x_start))
-        x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
-        model_out = self.model(x_noisy, t)
-
-        loss_dict = {}
-        if self.parameterization == "eps":
-            target = noise
-        elif self.parameterization == "x0":
-            target = x_start
-        else:
-            raise NotImplementedError(f"Parameterization {self.parameterization} not yet supported")
-
-        loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3])
-
-        log_prefix = 'train' if self.training else 'val'
-
-        loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()})
-        loss_simple = loss.mean() * self.l_simple_weight
-
-        loss_vlb = (self.lvlb_weights[t] * loss).mean()
-        loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb})
-
-        loss = loss_simple + self.original_elbo_weight * loss_vlb
-
-        loss_dict.update({f'{log_prefix}/loss': loss})
-
-        return loss, loss_dict
-
-    def forward(self, x, *args, **kwargs):
-        # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size
-        # assert h == img_size and w == img_size, f'height and width of image must be {img_size}'
-        t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long()
-        return self.p_losses(x, t, *args, **kwargs)
-
-    def get_input(self, batch, k):
-        x = batch[k]
-        if len(x.shape) == 3:
-            x = x[..., None]
-        x = rearrange(x, 'b h w c -> b c h w')
-        x =
x.to(memory_format=torch.contiguous_format).float() - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - - def training_step(self, batch, batch_idx): - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - - def _get_rows_from_list(self, samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # get diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - - -class LatentDiffusion(DDPM): - """main class""" - def __init__(self, - first_stage_config, - cond_stage_config, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - *args, **kwargs): - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__': - 
conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. 
/ z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = 
torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None): - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None: - if cond_key is None: - cond_key = 
self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ['caption', 'coordinates_bbox']: - xc = batch[cond_key] - elif cond_key == 'class_label': - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - # import pudb; pudb.set_trace() - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. 
reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - # same as above but without decorator - def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - df = self.split_input_params["vqf"] - self.split_input_params['original_image_size'] = x.shape[-2:] - bs, nc, h, w = x.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df) - z = unfold(x) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - output_list = [self.first_stage_model.encode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) - o = o * weighting - - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization - return decoded - - else: - return self.first_stage_model.encode(x) - else: - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset - def rescale_bbox(bbox): - x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) - y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) - w = min(bbox[2] / crop_coordinates[2], 1 - x0) - h = min(bbox[3] / crop_coordinates[3], 1 - y0) - return x0, y0, w, h - - return [rescale_bbox(b) for b in bboxes] - - def apply_model(self, x_noisy, t, cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is exptected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - if hasattr(self, "split_input_params"): - assert len(cond) == 1 # todo can only deal with one conditioning atm - assert not return_ids - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - - h, w = x_noisy.shape[-2:] - - fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride) - - z = unfold(x_noisy) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])] - - if self.cond_stage_key in ["image", "LR_image", "segmentation", - 'bbox_img'] and self.model.conditioning_key: # todo check for completeness - c_key = next(iter(cond.keys())) # get key - c = next(iter(cond.values())) # get value - assert (len(c) == 1) # todo extend to list with more than one elem - c = c[0] # get element - - c = unfold(c) - c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])] - - elif self.cond_stage_key == 'coordinates_bbox': - assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size' - - # assuming padding of unfold is always 0 and its dilation is always 1 - n_patches_per_row = int((w - ks[0]) / stride[0] + 1) - full_img_h, full_img_w = self.split_input_params['original_image_size'] - # as we are operating on latents, we need the factor from the original image size to the - # spatial latent size to properly rescale the crops for regenerating the bbox annotations - num_downs = self.first_stage_model.encoder.num_resolutions - 1 - rescale_latent = 2 ** (num_downs) - - # get top left postions of patches as conforming for the bbbox tokenizer, therefore we - # need to rescale the tl patch coordinates to be in between (0,1) - tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w, - rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h) - for patch_nr in range(z.shape[-1])] - - # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w) - patch_limits = [(x_tl, y_tl, - rescale_latent * ks[0] / full_img_w, - rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates] - # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates] - - # tokenize crop coordinates for the bounding boxes of the respective patches - patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device) - for bbox in patch_limits] # list of length l with tensors of shape (1, 2) - print(patch_limits_tknzd[0].shape) - # cut tknzd crop position from conditioning - assert isinstance(cond, dict), 'cond must be dict to be fed into model' - cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device) - print(cut_cond.shape) - - adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd]) - adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n') - print(adapted_cond.shape) - adapted_cond = self.get_learned_conditioning(adapted_cond) - print(adapted_cond.shape) - adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1]) - print(adapted_cond.shape) - - cond_list = [{'c_crossattn': [e]} for e in adapted_cond] - - else: - cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient - - # apply model by loop over crops - output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])] - assert not isinstance(output_list[0], - tuple) # todo cant deal with multiple model outputs check this never happens - - o = torch.stack(output_list, 
axis=-1) - o = o * weighting - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - x_recon = fold(o) / normalization - - else: - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) 
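# --- Editorial sketch (illustrative, not part of the original ddpm.py) ----
# p_sample below perturbs the posterior mean with Gaussian noise scaled by
# exp(0.5 * log_var), except at t == 0, where a broadcastable mask zeroes the
# noise so the final denoising step is deterministic. A standalone
# restatement of that masking logic (hypothetical helper, assuming PyTorch
# tensors with model_mean of shape (b, c, h, w) and t of shape (b,)):
import torch

def ancestral_step(model_mean, model_log_variance, t):
    noise = torch.randn_like(model_mean)
    # 1 where t > 0 (keep noise), 0 at t == 0 (deterministic last step)
    nonzero_mask = (t > 0).float().reshape(-1, *([1] * (model_mean.dim() - 1)))
    return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
# --------------------------------------------------------------------------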
- if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - 
score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None,**kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs): - - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size, - shape,cond,verbose=False,**kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True,**kwargs) - - return samples, intermediates - - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, **kwargs): - - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - 
return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"]) - log["conditioning"] = xc - elif self.cond_stage_key == 'class_label': - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with self.ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta, - quantize_denoised=True) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] 
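# --- Editorial sketch (illustrative, not part of the original ddpm.py) ----
# The mask built above keeps the border (ones) and regenerates the center
# square (zeros). During the sampling loop the known region is re-noised from
# x0 at every step and pasted back, so only the masked-out square is actually
# generated: img = q_sample(x0, t) * mask + (1 - mask) * img.
# Minimal mask construction for a 4x4 latent (hypothetical size):
import torch

mask = torch.ones(1, 1, 4, 4)
mask[:, :, 1:3, 1:3] = 0.0  # zeros mark the region to be synthesized
# --------------------------------------------------------------------------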
- with self.ema_scope("Plotting Inpaint"): - - samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - with self.ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with self.ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
- return x - - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm'] - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None): - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid': - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class Layout2ImgDiffusion(LatentDiffusion): - # TODO: move all layout-specific hacks to this class - def __init__(self, cond_stage_key, *args, **kwargs): - assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"' - super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs) - - def log_images(self, batch, N=8, *args, **kwargs): - logs = super().log_images(batch=batch, N=N, *args, **kwargs) - - key = 'train' if self.training else 'validation' - dset = self.trainer.datamodule.datasets[key] - mapper = dset.conditional_builders[self.cond_stage_key] - - bbox_imgs = [] - map_fn = lambda catno: dset.get_textual_label(dset.get_category_id(catno)) - for tknzd_bbox in batch[self.cond_stage_key][:N]: - bboximg = mapper.plot(tknzd_bbox.detach().cpu(), map_fn, (256, 256)) - bbox_imgs.append(bboximg) - - cond_img = torch.stack(bbox_imgs, dim=0) - logs['bbox_image'] = cond_img - return logs diff --git a/spaces/awacke1/CB-SL-Chatbot-Blenderbot/app.py b/spaces/awacke1/CB-SL-Chatbot-Blenderbot/app.py deleted file mode 100644 index c6e7735b63de80478d2a3ebf24cc39ee6b87f529..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CB-SL-Chatbot-Blenderbot/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import streamlit as st -#from streamlit_chat import message as st_message -from streamlit_chat import message as st_message -from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration - -st.title("Chatbot Blenderbot Streamlit") - -if "history" not in st.session_state: - st.session_state.history = [] - -def get_models(): - tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill") - model = BlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill") - return tokenizer, model - -def generate_answer(): - tokenizer, model = get_models() - user_message = st.session_state.input_text - inputs = tokenizer(st.session_state.input_text, return_tensors="pt") - result = model.generate(**inputs) - message_bot = tokenizer.decode(result[0], skip_special_tokens=True) # .replace("", "").replace("", "") - st.session_state.history.append({"message": user_message, "is_user": True}) - st.session_state.history.append({"message": message_bot, "is_user": False}) - -st.text_input("Response", key="input_text", on_change=generate_answer) - -for chat in st.session_state.history: - st_message(**chat) diff --git a/spaces/awacke1/GetAllContent/app.py b/spaces/awacke1/GetAllContent/app.py 
deleted file mode 100644
index 3ffb6528206ba2a1d8371ac798679730fcf93401..0000000000000000000000000000000000000000
--- a/spaces/awacke1/GetAllContent/app.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import streamlit as st
-import requests
-from bs4 import BeautifulSoup
-import os
-import urllib
-import base64
-
-EXCLUDED_FILES = ['app.py', 'requirements.txt', 'pre-requirements.txt', 'packages.txt', 'README.md','.gitattributes', "backup.py","Dockerfile"]
-
-def download_file(url, local_filename):
-    if url.startswith('http://') or url.startswith('https://'):
-        try:
-            with requests.get(url, stream=True) as r:
-                r.raise_for_status()
-                with open(local_filename, 'wb') as f:
-                    for chunk in r.iter_content(chunk_size=8192):
-                        f.write(chunk)
-            return local_filename
-        except requests.exceptions.HTTPError as err:
-            print(f"HTTP error occurred: {err}")
-
-def download_html_and_files(url):
-    html_content = requests.get(url).text
-    soup = BeautifulSoup(html_content, 'html.parser')
-
-    base_url = urllib.parse.urlunparse(urllib.parse.urlparse(url)._replace(path='', params='', query='', fragment=''))
-
-    for link in soup.find_all('a'):
-        file_url = urllib.parse.urljoin(base_url, link.get('href'))
-        local_filename = urllib.parse.urlparse(file_url).path.split('/')[-1]
-        if local_filename:
-            link['href'] = local_filename
-            download_file(file_url, local_filename)
-
-    with open("index.html", "w") as file:
-        file.write(str(soup))
-
-def list_files(directory_path='.'):
-    files = [f for f in os.listdir(directory_path) if os.path.isfile(os.path.join(directory_path, f))]
-    return [f for f in files if f not in EXCLUDED_FILES]
-
-def get_download_link(file):
-    with open(file, "rb") as f:
-        bytes = f.read()
-        b64 = base64.b64encode(bytes).decode()
-    href = f'<a href="data:file/octet-stream;base64,{b64}" download="{file}">Click to download {file}</a>'
-    return href
-
-def show_download_links():
-    st.sidebar.write('Here are the files you can download:')
-    for file in list_files():
-        st.sidebar.markdown(get_download_link(file), unsafe_allow_html=True)
-
-def main():
-    st.sidebar.title('Bulk Download Tool')
-    url = st.sidebar.text_input('Please enter a URL to bulk download text and files')
-    if st.sidebar.button('📥 Get All the Content'):
-        download_html_and_files(url)
-        show_download_links()
-    if st.sidebar.button('📂 Show Download Links'):
-        show_download_links()
-
-if __name__ == "__main__":
-    main()
diff --git a/spaces/awacke1/HTML5-ThreeJS-3D/README.md b/spaces/awacke1/HTML5-ThreeJS-3D/README.md
deleted file mode 100644
index 570aafad424a5662e31b11fda916ab942c3bb754..0000000000000000000000000000000000000000
--- a/spaces/awacke1/HTML5-ThreeJS-3D/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
----
-title: HTML5 ThreeJS 3D
-emoji: 🏢
-colorFrom: red
-colorTo: indigo
-sdk: static
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/Intrinsic.Bias.Analyzer/README.md b/spaces/awacke1/Intrinsic.Bias.Analyzer/README.md
deleted file mode 100644
index 31548827b8bf9de4f3d543db9c38828770b31499..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Intrinsic.Bias.Analyzer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Intrinsic.Bias.Analyzer
-emoji: 📈
-colorFrom: gray
-colorTo: green
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/LED-Long-Form-SummariesBeamLengthTokenRepNgramVariantsTDDGradio/README.md
b/spaces/awacke1/LED-Long-Form-SummariesBeamLengthTokenRepNgramVariantsTDDGradio/README.md deleted file mode 100644 index c9d55a0f95f0c5523cc5bff7649137d7e29febc8..0000000000000000000000000000000000000000 --- a/spaces/awacke1/LED-Long-Form-SummariesBeamLengthTokenRepNgramVariantsTDDGradio/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 🧠LED Long Form SummariesBeamLengthTokenRepNgramVariantsTDDGradio📖 -emoji: 🧠📖 -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.1.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Write-Stories-Using-Bloom/README.md b/spaces/awacke1/Write-Stories-Using-Bloom/README.md deleted file mode 100644 index 109f977a640adb0e1a2d88b0fd1e33b996fd2105..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Write-Stories-Using-Bloom/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Write Stories Using Bloom -emoji: 🌸 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.0.25 -app_file: app.py -pinned: false -license: gpl -duplicated_from: EuroPython2022/Write-Stories-Using-Bloom ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awen666/web-ui/_next/static/hzuToYh76GqB3K_SxnpFb/_buildManifest.js b/spaces/awen666/web-ui/_next/static/hzuToYh76GqB3K_SxnpFb/_buildManifest.js deleted file mode 100644 index 8104b5ff533aabdafc3b1fddf10674f335c1d308..0000000000000000000000000000000000000000 --- a/spaces/awen666/web-ui/_next/static/hzuToYh76GqB3K_SxnpFb/_buildManifest.js +++ /dev/null @@ -1 +0,0 @@ -self.__BUILD_MANIFEST={__rewrites:{beforeFiles:[],afterFiles:[],fallback:[]},"/_error":["static/chunks/pages/_error-87afbe7e3d327810.js"],sortedPages:["/_app","/_error"]},self.__BUILD_MANIFEST_CB&&self.__BUILD_MANIFEST_CB(); \ No newline at end of file diff --git a/spaces/badayvedat/LLaVA/llava/eval/eval_science_qa_gpt4.py b/spaces/badayvedat/LLaVA/llava/eval/eval_science_qa_gpt4.py deleted file mode 100644 index c2ff17c915481fb556aba6ec816a9e08f519c515..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/LLaVA/llava/eval/eval_science_qa_gpt4.py +++ /dev/null @@ -1,104 +0,0 @@ -import argparse -import json -import os -import re -import random -from collections import defaultdict - - -def get_args(): - parser = argparse.ArgumentParser() - parser.add_argument('--base-dir', type=str) - parser.add_argument('--gpt4-result', type=str) - parser.add_argument('--our-result', type=str) - parser.add_argument('--split', type=str, default='test') - parser.add_argument('--options', type=list, default=["A", "B", "C", "D", "E"]) - return parser.parse_args() - - -def convert_caps(results): - fakecaps = [] - for result in results: - image_id = result['question_id'] - caption = result['text'] - fakecaps.append({"image_id": int(image_id), "caption": caption}) - return fakecaps - - -def get_pred_idx(prediction, choices, options): - """ - Get the index (e.g. 2) from the prediction (e.g. 
'C') - """ - if prediction in options[:len(choices)]: - return options.index(prediction) - else: - return random.choice(range(len(choices))) - - -if __name__ == "__main__": - args = get_args() - - base_dir = args.base_dir - split_indices = json.load(open(os.path.join(base_dir, "pid_splits.json")))[args.split] - problems = json.load(open(os.path.join(base_dir, "problems.json"))) - our_predictions = [json.loads(line) for line in open(args.our_result)] - our_predictions = {pred['question_id']: pred for pred in our_predictions} - split_problems = {idx: problems[idx] for idx in split_indices} - - gpt4_predictions = json.load(open(args.gpt4_result))['outputs'] - - results = defaultdict(lambda: 0) - - for prob_id, prob in split_problems.items(): - if prob_id not in our_predictions: - continue - if prob_id not in gpt4_predictions: - continue - our_pred = our_predictions[prob_id]['text'] - gpt4_pred = gpt4_predictions[prob_id] - - pattern = re.compile(r'The answer is ([A-Z]).') - our_res = pattern.findall(our_pred) - if len(our_res) == 1: - our_answer = our_res[0] # 'A', 'B', ... - else: - our_answer = "FAILED" - gpt4_res = pattern.findall(gpt4_pred) - if len(gpt4_res) == 1: - gpt4_answer = gpt4_res[0] # 'A', 'B', ... - else: - gpt4_answer = "FAILED" - - our_pred_idx = get_pred_idx(our_answer, prob['choices'], args.options) - gpt4_pred_idx = get_pred_idx(gpt4_answer, prob['choices'], args.options) - - if gpt4_answer == 'FAILED': - results['gpt4_failed'] += 1 - # continue - gpt4_pred_idx = our_pred_idx - # if our_pred_idx != prob['answer']: - # print(our_predictions[prob_id]['prompt']) - # print('-----------------') - # print(f'LECTURE: {prob["lecture"]}') - # print(f'SOLUTION: {prob["solution"]}') - # print('=====================') - else: - # continue - pass - # gpt4_pred_idx = our_pred_idx - - if gpt4_pred_idx == prob['answer']: - results['correct'] += 1 - else: - results['incorrect'] += 1 - - - if gpt4_pred_idx == prob['answer'] or our_pred_idx == prob['answer']: - results['correct_upperbound'] += 1 - - correct = results['correct'] - total = results['correct'] + results['incorrect'] - print(f'Total: {total}, Correct: {correct}, Accuracy: {correct / total * 100:.2f}%') - print(f'Total: {total}, Correct (upper): {results["correct_upperbound"]}, Accuracy: {results["correct_upperbound"] / total * 100:.2f}%') - print(f'Total: {total}, GPT-4 NO-ANS (RANDOM): {results["gpt4_failed"]}, Percentage: {results["gpt4_failed"] / total * 100:.2f}%') - diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/DepthLimitedBlurShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/DepthLimitedBlurShader.js deleted file mode 100644 index 816c14796f0e6b7e0baeb85eda2495deacbffe33..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/DepthLimitedBlurShader.js +++ /dev/null @@ -1,155 +0,0 @@ -THREE.DepthLimitedBlurShader = { - defines: { - 'KERNEL_RADIUS': 4, - 'DEPTH_PACKING': 1, - 'PERSPECTIVE_CAMERA': 1 - }, - uniforms: { - 'tDiffuse': { type: 't', value: null }, - 'size': { type: 'v2', value: new THREE.Vector2( 512, 512 ) }, - 'sampleUvOffsets': { type: 'v2v', value: [ new THREE.Vector2( 0, 0 ) ] }, - 'sampleWeights': { type: '1fv', value: [ 1.0 ] }, - 'tDepth': { type: 't', value: null }, - 'cameraNear': { type: 'f', value: 10 }, - 'cameraFar': { type: 'f', value: 1000 }, - 'depthCutoff': { type: 'f', value: 10 }, - }, - vertexShader: [ - "#include ", - - "uniform vec2 size;", - - "varying vec2 vUv;", - 
"varying vec2 vInvSize;", - - "void main() {", - " vUv = uv;", - " vInvSize = 1.0 / size;", - - " gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - "}" - - ].join( "\n" ), - fragmentShader: [ - "#include ", - "#include ", - - "uniform sampler2D tDiffuse;", - "uniform sampler2D tDepth;", - - "uniform float cameraNear;", - "uniform float cameraFar;", - "uniform float depthCutoff;", - - "uniform vec2 sampleUvOffsets[ KERNEL_RADIUS + 1 ];", - "uniform float sampleWeights[ KERNEL_RADIUS + 1 ];", - - "varying vec2 vUv;", - "varying vec2 vInvSize;", - - "float getDepth( const in vec2 screenPosition ) {", - " #if DEPTH_PACKING == 1", - " return unpackRGBAToDepth( texture2D( tDepth, screenPosition ) );", - " #else", - " return texture2D( tDepth, screenPosition ).x;", - " #endif", - "}", - - "float getViewZ( const in float depth ) {", - " #if PERSPECTIVE_CAMERA == 1", - " return perspectiveDepthToViewZ( depth, cameraNear, cameraFar );", - " #else", - " return orthographicDepthToViewZ( depth, cameraNear, cameraFar );", - " #endif", - "}", - - "void main() {", - " float depth = getDepth( vUv );", - " if( depth >= ( 1.0 - EPSILON ) ) {", - " discard;", - " }", - - " float centerViewZ = -getViewZ( depth );", - " bool rBreak = false, lBreak = false;", - - " float weightSum = sampleWeights[0];", - " vec4 diffuseSum = texture2D( tDiffuse, vUv ) * weightSum;", - - " for( int i = 1; i <= KERNEL_RADIUS; i ++ ) {", - - " float sampleWeight = sampleWeights[i];", - " vec2 sampleUvOffset = sampleUvOffsets[i] * vInvSize;", - - " vec2 sampleUv = vUv + sampleUvOffset;", - " float viewZ = -getViewZ( getDepth( sampleUv ) );", - - " if( abs( viewZ - centerViewZ ) > depthCutoff ) rBreak = true;", - - " if( ! rBreak ) {", - " diffuseSum += texture2D( tDiffuse, sampleUv ) * sampleWeight;", - " weightSum += sampleWeight;", - " }", - - " sampleUv = vUv - sampleUvOffset;", - " viewZ = -getViewZ( getDepth( sampleUv ) );", - - " if( abs( viewZ - centerViewZ ) > depthCutoff ) lBreak = true;", - - " if( ! 
lBreak ) {", - " diffuseSum += texture2D( tDiffuse, sampleUv ) * sampleWeight;", - " weightSum += sampleWeight;", - " }", - - " }", - - " gl_FragColor = diffuseSum / weightSum;", - "}" - ].join( "\n" ) -}; - -THREE.BlurShaderUtils = { - - createSampleWeights: function ( kernelRadius, stdDev ) { - - var gaussian = function ( x, stdDev ) { - - return Math.exp( - ( x * x ) / ( 2.0 * ( stdDev * stdDev ) ) ) / ( Math.sqrt( 2.0 * Math.PI ) * stdDev ); - - }; - - var weights = []; - - for ( var i = 0; i <= kernelRadius; i ++ ) { - - weights.push( gaussian( i, stdDev ) ); - - } - - return weights; - - }, - - createSampleOffsets: function ( kernelRadius, uvIncrement ) { - - var offsets = []; - - for ( var i = 0; i <= kernelRadius; i ++ ) { - - offsets.push( uvIncrement.clone().multiplyScalar( i ) ); - - } - - return offsets; - - }, - - configure: function ( material, kernelRadius, stdDev, uvIncrement ) { - - material.defines[ 'KERNEL_RADIUS' ] = kernelRadius; - material.uniforms[ 'sampleUvOffsets' ].value = THREE.BlurShaderUtils.createSampleOffsets( kernelRadius, uvIncrement ); - material.uniforms[ 'sampleWeights' ].value = THREE.BlurShaderUtils.createSampleWeights( kernelRadius, stdDev ); - material.needsUpdate = true; - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_vertex.glsl.js deleted file mode 100644 index 68862d3b1af8056adccb782e3b6fa2d70865b51d..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/clipping_planes_vertex.glsl.js +++ /dev/null @@ -1,5 +0,0 @@ -export default /* glsl */` -#if NUM_CLIPPING_PLANES > 0 && ! defined( PHYSICAL ) && ! defined( PHONG ) && ! defined( MATCAP ) - vViewPosition = - mvPosition.xyz; -#endif -`; diff --git a/spaces/batuhantosun/Guided-Backpropagation/README.md b/spaces/batuhantosun/Guided-Backpropagation/README.md deleted file mode 100644 index f479a9d469871d1b03affdb18b521c80afb4e7c9..0000000000000000000000000000000000000000 --- a/spaces/batuhantosun/Guided-Backpropagation/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Guided Backpropagation -metaTitle: Guided Backpropagation on Hugging Face -emoji: 🍇 -colorFrom: yellow -colorTo: green -pinned: true -license: mit -sdk: gradio -app_file: app.py ---- \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/archs/discriminator_arch.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/archs/discriminator_arch.py deleted file mode 100644 index 870acec30509cef3658017b64d79365110b62a36..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/archs/discriminator_arch.py +++ /dev/null @@ -1,85 +0,0 @@ -from torch import nn as nn - -from basicsr.utils.registry import ARCH_REGISTRY - - -@ARCH_REGISTRY.register() -class VGGStyleDiscriminator(nn.Module): - """VGG style discriminator with input size 128 x 128 or 256 x 256. - - It is used to train SRGAN, ESRGAN, and VideoGAN. - - Args: - num_in_ch (int): Channel number of inputs. Default: 3. - num_feat (int): Channel number of base intermediate features.Default: 64. 
- """ - - def __init__(self, num_in_ch, num_feat, input_size=128): - super(VGGStyleDiscriminator, self).__init__() - self.input_size = input_size - assert self.input_size == 128 or self.input_size == 256, ( - f'input size must be 128 or 256, but received {input_size}') - - self.conv0_0 = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1, bias=True) - self.conv0_1 = nn.Conv2d(num_feat, num_feat, 4, 2, 1, bias=False) - self.bn0_1 = nn.BatchNorm2d(num_feat, affine=True) - - self.conv1_0 = nn.Conv2d(num_feat, num_feat * 2, 3, 1, 1, bias=False) - self.bn1_0 = nn.BatchNorm2d(num_feat * 2, affine=True) - self.conv1_1 = nn.Conv2d(num_feat * 2, num_feat * 2, 4, 2, 1, bias=False) - self.bn1_1 = nn.BatchNorm2d(num_feat * 2, affine=True) - - self.conv2_0 = nn.Conv2d(num_feat * 2, num_feat * 4, 3, 1, 1, bias=False) - self.bn2_0 = nn.BatchNorm2d(num_feat * 4, affine=True) - self.conv2_1 = nn.Conv2d(num_feat * 4, num_feat * 4, 4, 2, 1, bias=False) - self.bn2_1 = nn.BatchNorm2d(num_feat * 4, affine=True) - - self.conv3_0 = nn.Conv2d(num_feat * 4, num_feat * 8, 3, 1, 1, bias=False) - self.bn3_0 = nn.BatchNorm2d(num_feat * 8, affine=True) - self.conv3_1 = nn.Conv2d(num_feat * 8, num_feat * 8, 4, 2, 1, bias=False) - self.bn3_1 = nn.BatchNorm2d(num_feat * 8, affine=True) - - self.conv4_0 = nn.Conv2d(num_feat * 8, num_feat * 8, 3, 1, 1, bias=False) - self.bn4_0 = nn.BatchNorm2d(num_feat * 8, affine=True) - self.conv4_1 = nn.Conv2d(num_feat * 8, num_feat * 8, 4, 2, 1, bias=False) - self.bn4_1 = nn.BatchNorm2d(num_feat * 8, affine=True) - - if self.input_size == 256: - self.conv5_0 = nn.Conv2d(num_feat * 8, num_feat * 8, 3, 1, 1, bias=False) - self.bn5_0 = nn.BatchNorm2d(num_feat * 8, affine=True) - self.conv5_1 = nn.Conv2d(num_feat * 8, num_feat * 8, 4, 2, 1, bias=False) - self.bn5_1 = nn.BatchNorm2d(num_feat * 8, affine=True) - - self.linear1 = nn.Linear(num_feat * 8 * 4 * 4, 100) - self.linear2 = nn.Linear(100, 1) - - # activation function - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - assert x.size(2) == self.input_size, (f'Input size must be identical to input_size, but received {x.size()}.') - - feat = self.lrelu(self.conv0_0(x)) - feat = self.lrelu(self.bn0_1(self.conv0_1(feat))) # output spatial size: /2 - - feat = self.lrelu(self.bn1_0(self.conv1_0(feat))) - feat = self.lrelu(self.bn1_1(self.conv1_1(feat))) # output spatial size: /4 - - feat = self.lrelu(self.bn2_0(self.conv2_0(feat))) - feat = self.lrelu(self.bn2_1(self.conv2_1(feat))) # output spatial size: /8 - - feat = self.lrelu(self.bn3_0(self.conv3_0(feat))) - feat = self.lrelu(self.bn3_1(self.conv3_1(feat))) # output spatial size: /16 - - feat = self.lrelu(self.bn4_0(self.conv4_0(feat))) - feat = self.lrelu(self.bn4_1(self.conv4_1(feat))) # output spatial size: /32 - - if self.input_size == 256: - feat = self.lrelu(self.bn5_0(self.conv5_0(feat))) - feat = self.lrelu(self.bn5_1(self.conv5_1(feat))) # output spatial size: / 64 - - # spatial size: (4, 4) - feat = feat.view(feat.size(0), -1) - feat = self.lrelu(self.linear1(feat)) - out = self.linear2(feat) - return out diff --git a/spaces/bigcode/santacoder-search/app.py b/spaces/bigcode/santacoder-search/app.py deleted file mode 100644 index 9cf178b0a9884bdd7d80e24d209009de47784b01..0000000000000000000000000000000000000000 --- a/spaces/bigcode/santacoder-search/app.py +++ /dev/null @@ -1,103 +0,0 @@ -import http.client as http_client -import json -import logging -import os -import re -import string - -import gradio as gr -import requests - - -def 
mark_tokens_bold(string, tokens): - for token in tokens: - pattern = re.escape(token) #r"\b" + re.escape(token) + r"\b" - string = re.sub(pattern, "" + token + "", string) - return string - - -def process_results(results, highlight_terms): - if len(results) == 0: - return """

No results retrieved.



""" - - results_html = "" - for result in results: - text_html = result["text"] - text_html = mark_tokens_bold(text_html, highlight_terms) - - docid_html = str(result["docid"]) - - licenses = " | ".join(result["repo_license"]) - repo_name = result["repo_name"] - repo_path = result["repo_path"] - - results_html += """\ -

Repository name: {}

-

Repository path: {}

-

Repository licenses: {}

-
-
{}
-
-
-
- """.format(repo_name, repo_path, licenses, text_html) - return results_html - - -def scisearch(query, language, num_results=10): - - query = " ".join(query.split()) - if query == "" or query is None: - return "" - - post_data = {"query": query, "k": num_results} - - output = requests.post( - os.environ.get("address"), - headers={"Content-type": "application/json"}, - data=json.dumps(post_data), - timeout=60, - ) - - payload = json.loads(output.text) - - results = payload["results"] - highlight_terms = payload["highlight_terms"] - return process_results(results, highlight_terms) - - -description = """#

🎅 SantaCoder: Dataset Search 🔍

-When you use SantaCoder to generate code it might produce exact copies of code in the pretraining dataset. -In that case, the code license might have requirements to comply with. -With this search tool we aim to provide help to find out where the code came from, in order for the user to comply with licensing requirements in case the code produced by SantaCoder belongs to an already existing repository.""" - - -if __name__ == "__main__": - demo = gr.Blocks( - css=".gradio-container {background-color: #20233fff; color:white}" - ) - - with demo: - with gr.Row(): - gr.Markdown(value=description) - with gr.Row(): - query = gr.Textbox(lines=5, placeholder="Type your query here...", label="Query") - with gr.Row(): - k = gr.Slider(1, 100, value=10, step=1, label="Max Results") - with gr.Row(): - submit_btn = gr.Button("Submit") - with gr.Row(): - results = gr.HTML(label="Results", value="contact") - - def submit(query, k, lang="en"): - query = query.strip() - if query is None or query == "": - return "", "" - return { - results: scisearch(query, lang, k), - } - - query.submit(fn=submit, inputs=[query, k], outputs=[results]) - submit_btn.click(submit, inputs=[query, k], outputs=[results]) - - demo.launch(enable_queue=True, debug=True) diff --git a/spaces/bioriAsaeru/text-to-voice/Didi Hollywood 720p Or 1080p Drivers Tupac Starga ((FULL)).md b/spaces/bioriAsaeru/text-to-voice/Didi Hollywood 720p Or 1080p Drivers Tupac Starga ((FULL)).md deleted file mode 100644 index 5a1aabb57f7718853bc6a42874aa56e4f38f0559..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Didi Hollywood 720p Or 1080p Drivers Tupac Starga ((FULL)).md +++ /dev/null @@ -1,6 +0,0 @@ -

Didi Hollywood 720p Or 1080p drivers tupac starga


Download Filehttps://urloso.com/2uyPCC



- - aaccfb2cb3
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Hirens.BootCD.9.1.iso.md b/spaces/bioriAsaeru/text-to-voice/Hirens.BootCD.9.1.iso.md deleted file mode 100644 index fa33863007a8eab4b82869c62ddf67a7282f831b..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Hirens.BootCD.9.1.iso.md +++ /dev/null @@ -1,18 +0,0 @@ -

Hirens.BootCD.9.1.iso


Download Ziphttps://urloso.com/2uyRVT



- -Local Filename, Hirens.BootCD.9.1.iso. Filesize, 76248156 bytes. Bootable flag, no. Boot(1) flag, no. Boot sector flag, no. Locating partition, 0x80. - -ScreenshotLung transplantation and pregnancy in the female lung transplant recipient. - -Pregnancy following lung transplantation is rare but can be very successful in terms of maternal and neonatal outcomes. For women with a history of lung transplantation, pregnancy management is complex and requires careful coordination of lung transplant and obstetric care. The perinatal and maternal risks and outcomes are described. The impact of maternal rejection on pregnancy outcome is not clear. A large retrospective case series of pregnancies following lung transplantation was completed. Data were collected and analyzed for maternal and fetal outcomes. Data included maternal demographics, treatment, pregnancy outcome, and maternal and neonatal outcomes. Pregnancies were classified as uncomplicated, complicated, or not possible. Outcomes were compared by mode of delivery, chorioamnionitis, and maternal rejection. Of the 19 pregnancies in 11 women, 12 (63%) were uncomplicated, four (21%) were complicated, and three (16%) were not possible. Three (16%) women were treated with immunosuppressive agents. Maternal outcomes included two pulmonary complications (each requiring hospitalization), and one pregnancy-related death. The three neonates who died were born to two women with bronchiolitis obliterans syndrome. Two women had the procedure aborted at 21 and 22 weeks' gestation because of recurrence of maternal rejection. Pregnancy in the female lung transplant recipient is possible and can have a successful outcome. However, the perinatal risk in this group of women should not be underestimated. The mother and fetus should be counseled regarding the risk of maternal rejection and the increased risk of prematurity.Q: - -CSS navigation bar with background image on mobile and desktop - -I'm trying to make a navigation bar for my website. So far I've got the navigation bar on my desktop and mobile and that's fine, the problem is, when I'm on desktop I want to display a picture with a background image and when I'm on mobile I want to display just a text. 
This is what I've got: - -HTML - - - ))} - - ) -} diff --git a/spaces/hhhhardman/VITS/utils.py b/spaces/hhhhardman/VITS/utils.py deleted file mode 100644 index 9794e0fc3463a5e8fad05c037cce64683059a6d3..0000000000000000000000000000000000000000 --- a/spaces/hhhhardman/VITS/utils.py +++ /dev/null @@ -1,226 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.ERROR) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', 
'--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() \ No newline at end of file diff --git a/spaces/huaiji3y/bingo-Public/src/lib/bots/bing/sr.ts b/spaces/huaiji3y/bingo-Public/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/huaiji3y/bingo-Public/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? 
( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/huaiji3y/bingo-Public/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/huaiji3y/bingo-Public/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/huaiji3y/bingo-Public/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/huaiji3y/bingo-Public/src/pages/api/healthz.ts b/spaces/huaiji3y/bingo-Public/src/pages/api/healthz.ts deleted file mode 100644 index 
f6ae44ff0fd66ccd3f7feaa550025fbf2a83bf77..0000000000000000000000000000000000000000 --- a/spaces/huaiji3y/bingo-Public/src/pages/api/healthz.ts +++ /dev/null @@ -1,7 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - res.status(200).end('ok') -} diff --git a/spaces/huggingface-projects/easy-analysis/app.py b/spaces/huggingface-projects/easy-analysis/app.py deleted file mode 100644 index 2a517d0fcd1736c52d0181b6531eb46a99f65759..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/easy-analysis/app.py +++ /dev/null @@ -1,135 +0,0 @@ -import gradio as gr -import pandas as pd -from huggingface_hub.hf_api import create_repo, upload_file, HfApi -from huggingface_hub.repository import Repository -import subprocess -import os -import tempfile -import sweetviz as sv - -def analyze_datasets(dataset, dataset_name, token, column=None, pairwise="off"): - df = pd.read_csv(dataset.name) - username = HfApi().whoami(token=token)["name"] - if column is not None: - analyze_report = sv.analyze(df, target_feat=column, pairwise_analysis=pairwise) - else: - analyze_report = sv.analyze(df, pairwise_analysis=pairwise) - analyze_report.show_html('./index.html', open_browser=False) - repo_url = create_repo(f"{username}/{dataset_name}", repo_type = "space", token = token, space_sdk = "static", private=False) - - upload_file(path_or_fileobj ="./index.html", path_in_repo = "index.html", repo_id =f"{username}/{dataset_name}", repo_type = "space", token=token) - readme = f"---\ntitle: {dataset_name}\nemoji: ✨\ncolorFrom: green\ncolorTo: red\nsdk: static\npinned: false\ntags:\n- dataset-report\n---" - with open("README.md", "w+") as f: - f.write(readme) - upload_file(path_or_fileobj ="./README.md", path_in_repo = "README.md", repo_id =f"{username}/{dataset_name}", repo_type = "space", token=token) - - return f"Your dataset report will be ready at {repo_url}" - -def compare_column_values(dataset, dataset_name, token, column, category): - - df = pd.read_csv(dataset.name) - username = HfApi().whoami(token=token)["name"] - arr = df[column].unique() - arr = list(arr[arr != column]) - compare_report = sv.compare_intra(df, df[column] == category, arr[0]) - compare_report.show_html('./index.html', open_browser=False) - - repo_url = create_repo(f"{username}/{dataset_name}", repo_type = "space", token = token, space_sdk = "static", private=False) - - upload_file(path_or_fileobj ="./index.html", path_in_repo = "index.html", repo_id =f"{username}/{dataset_name}", repo_type = "space", token=token) - readme = f"---\ntitle: {dataset_name}\nemoji: ✨\ncolorFrom: green\ncolorTo: red\nsdk: static\npinned: false\ntags:\n- dataset-report\n---" - with open("README.md", "w+") as f: - f.write(readme) - upload_file(path_or_fileobj ="./README.md", path_in_repo = "README.md", repo_id =f"{username}/{dataset_name}", repo_type = "space", token=token) - - return f"Your dataset report will be ready at {repo_url}" - -def compare_dataset_splits(dataset, dataset_name, token, splits): - df = pd.read_csv(dataset.name) - train = df.sample(frac=splits) - test = df.loc[df.index.difference(train.index)] - username = HfApi().whoami(token=token)["name"] - - compare_report = sv.compare([train, "Training Data"], [test, "Test Data"]) - compare_report.show_html('./index.html', open_browser=False) - - repo_url = create_repo(f"{username}/{dataset_name}", repo_type = "space", token = token, space_sdk = "static", private=False) - - 
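- # push the generated HTML report into the newly created static Space as its index.html landing page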
upload_file(path_or_fileobj ="./index.html", path_in_repo = "index.html", repo_id =f"{username}/{dataset_name}", repo_type = "space", token=token) - readme = f"---\ntitle: {dataset_name}\nemoji: ✨\ncolorFrom: green\ncolorTo: red\nsdk: static\npinned: false\ntags:\n- dataset-report\n---" - with open("README.md", "w+") as f: - f.write(readme) - upload_file(path_or_fileobj ="./README.md", path_in_repo = "README.md", repo_id =f"{username}/{dataset_name}", repo_type = "space", token=token) - - return f"Your dataset report will be ready at {repo_url}" - - - -with gr.Blocks() as demo: - main_title = gr.Markdown("""# Easy Analysis🪄🌟✨""") - main_desc = gr.Markdown("""This app enables you to run three type of dataset analysis and pushes the interactive reports to your Hugging Face Hub profile as a Space. It uses SweetViz in the back.""") - with gr.Tabs(): - with gr.TabItem("Analyze") as analyze: - with gr.Row(): - with gr.Column(): - title = gr.Markdown(""" ## Analyze Dataset """) - description = gr.Markdown("Analyze a dataset or predictive variables against a target variable in a dataset (enter a column name to column section if you want to compare against target value). You can also do pairwise analysis, but it has quadratic complexity.") - dataset = gr.File(label = "Dataset") - column = gr.Text(label = "Compare dataset against a target variable (Optional)") - pairwise = gr.Radio(["off", "on"], label = "Enable pairwise analysis") - token = gr.Textbox(label = "Your Hugging Face Token") - dataset_name = gr.Textbox(label = "Dataset Name") - pushing_desc = gr.Markdown("This app needs your Hugging Face Hub token and a unique name for your dataset report.") - inference_run = gr.Button("Infer") - inference_progress = gr.StatusTracker(cover_container=True) - outcome = gr.outputs.Textbox() - inference_run.click( - analyze_datasets, - inputs=[dataset, dataset_name, token, column, pairwise], - outputs=outcome, - status_tracker=inference_progress, - ) - with gr.TabItem("Compare Splits") as compare_splits: - with gr.Row(): - with gr.Column(): - title = gr.Markdown(""" ## Compare Splits""") - description = gr.Markdown("Split a dataset and compare splits. You need to give a fraction, e.g. 0.8.") - dataset = gr.File(label = "Dataset") - split_ratio = gr.Number(label = "Split Ratios") - pushing_desc = gr.Markdown("This app needs your Hugging Face Hub token and a unique name for your dataset report.") - token = gr.Textbox(label = "Your Hugging Face Token") - dataset_name = gr.Textbox(label = "Dataset Name") - inference_run = gr.Button("Infer") - inference_progress = gr.StatusTracker(cover_container=True) - - outcome = gr.outputs.Textbox() - inference_run.click( - compare_dataset_splits, - inputs=[dataset, dataset_name, token, split_ratio], - outputs=outcome, - status_tracker=inference_progress, - ) - - with gr.TabItem("Compare Subsets") as compare_subsets: - with gr.Row(): - with gr.Column(): - title = gr.Markdown(""" ## Compare Subsets""") - description = gr.Markdown("Compare subsets of a dataset, e.g. 
you can pick Age Group column and compare adult category against young.") - dataset = gr.File(label = "Dataset") - column = gr.Text(label = "Enter column:") - category = gr.Text(label = "Enter category:") - pushing_desc = gr.Markdown("This app needs your Hugging Face Hub token and a unique name for your dataset report.") - token = gr.Textbox(label = "Your Hugging Face Token") - dataset_name = gr.Textbox(label = "Dataset Name") - inference_run = gr.Button("Run Analysis") - inference_progress = gr.StatusTracker(cover_container=True) - - outcome = gr.outputs.Textbox() - inference_run.click( - compare_column_values, - inputs=[dataset, dataset_name, token, column, category ], - outputs=outcome, - status_tracker=inference_progress, - ) - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/huggingface-projects/huggingbots/codellama.py b/spaces/huggingface-projects/huggingbots/codellama.py deleted file mode 100644 index c7db97465cbd0c9ff144d690aa4bd99d87e5859f..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/huggingbots/codellama.py +++ /dev/null @@ -1,139 +0,0 @@ -import asyncio -import json -import os - -from gradio_client import Client - -HF_TOKEN = os.getenv("HF_TOKEN") - -codellama = Client("https://huggingface-projects-codellama-13b-chat.hf.space/", HF_TOKEN) - -BOT_USER_ID = 1102236653545861151 # real -CODELLAMA_CHANNEL_ID = 1147210106321256508 # real - - -codellama_threadid_userid_dictionary = {} -codellama_threadid_conversation = {} - - -def codellama_initial_generation(prompt, thread): - """job.submit inside of run_in_executor = more consistent bot behavior""" - global codellama_threadid_conversation - - chat_history = f"{thread.id}.json" - conversation = [] - with open(chat_history, "w") as json_file: - json.dump(conversation, json_file) - - job = codellama.submit(prompt, chat_history, fn_index=0) - - while job.done() is False: - pass - else: - result = job.outputs()[-1] - with open(result, "r") as json_file: - data = json.load(json_file) - response = data[-1][-1] - conversation.append((prompt, response)) - with open(chat_history, "w") as json_file: - json.dump(conversation, json_file) - - codellama_threadid_conversation[thread.id] = chat_history - if len(response) > 1300: - response = response[:1300] + "...\nTruncating response due to discord api limits." 
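- # Discord caps messages at 2000 characters; cutting to 1300 keeps the reply safely under that limit with room for the truncation notice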
- return response - - -async def try_codellama(ctx, prompt): - """Generates text based on a given prompt""" - try: - global codellama_threadid_userid_dictionary # tracks userid-thread existence - global codellama_threadid_conversation - - if ctx.author.id != BOT_USER_ID: - if ctx.channel.id == CODELLAMA_CHANNEL_ID: - message = await ctx.send(f"**{prompt}** - {ctx.author.mention}") - if len(prompt) > 99: - small_prompt = prompt[:99] - else: - small_prompt = prompt - thread = await message.create_thread(name=small_prompt, auto_archive_duration=60) - - loop = asyncio.get_running_loop() - output_code = await loop.run_in_executor(None, codellama_initial_generation, prompt, thread) - codellama_threadid_userid_dictionary[thread.id] = ctx.author.id - - print(output_code) - await thread.send(output_code) - except Exception as e: - print(f"try_codellama Error: {e}") - await ctx.send(f"Error: {e} <@811235357663297546> (try_codellama error)") - - -async def continue_codellama(message): - """Continues a given conversation based on chat_history""" - try: - if not message.author.bot: - global codellama_threadid_userid_dictionary # tracks userid-thread existence - if message.channel.id in codellama_threadid_userid_dictionary: # is this a valid thread? - if codellama_threadid_userid_dictionary[message.channel.id] == message.author.id: - print("Safetychecks passed for continue_codellama") - global codellama_threadid_conversation - - prompt = message.content - chat_history = codellama_threadid_conversation[message.channel.id] - - # Check to see if conversation is ongoing or ended (>15000 characters) - with open(chat_history, "r") as json_file: - conversation = json.load(json_file) - total_characters = 0 - for item in conversation: - for string in item: - total_characters += len(string) - - if total_characters < 15000: - if os.environ.get("TEST_ENV") == "True": - print("Running codellama.submit") - job = codellama.submit(prompt, chat_history, fn_index=0) - while job.done() is False: - pass - else: - if os.environ.get("TEST_ENV") == "True": - print("Continue_codellama job done") - - result = job.outputs()[-1] - with open(result, "r") as json_file: - data = json.load(json_file) - response = data[-1][-1] - - with open(chat_history, "r") as json_file: - conversation = json.load(json_file) - - conversation.append((prompt, response)) - # now we have prompt, response, and the newly updated full conversation - - with open(chat_history, "w") as json_file: - json.dump(conversation, json_file) - if os.environ.get("TEST_ENV") == "True": - print(prompt) - print(response) - print(conversation) - print(chat_history) - - codellama_threadid_conversation[message.channel.id] = chat_history - if len(response) > 1300: - response = response[:1300] + "...\nTruncating response due to discord api limits." 
- - await message.reply(response) - - total_characters = 0 - for item in conversation: - for string in item: - total_characters += len(string) - - if total_characters >= 15000: - await message.reply("Conversation ending due to length, feel free to start a new one!") - - except Exception as e: - print(f"continue_codellama Error: {e}") - await message.reply(f"Error: {e} <@811235357663297546> (continue_codellama error)") diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/liveblocks/useList.ts b/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/liveblocks/useList.ts deleted file mode 100644 index bc9c8edd1c05332f304c1a033cf349fadd8daffa..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/frontend/src/lib/liveblocks/useList.ts +++ /dev/null @@ -1,47 +0,0 @@ -// @ts-nocheck -import { LiveList } from "@liveblocks/client"; -import { useStorage } from "./useStorage"; -import { onDestroy } from "svelte"; -import type { Writable } from "svelte/store"; -import { writable } from "svelte/store"; -import { useRoom } from "./useRoom"; - -/** - * Works similarly to `liveblocks-react` useList - * https://liveblocks.io/docs/api-reference/liveblocks-react#useList - * - * The main difference is that it returns a Svelte store: - * const list = useList() - * $list.push([{ item: 1 }]) - * console.log([...$list]) - */ -export function useList( - name: string, - initial?: any[] -): Writable> { - const room = useRoom(); - const rootStore = useStorage(); - const list = writable>(); - let unsubscribe = () => {}; - - const unsubscribeRoot = rootStore.subscribe((root) => { - if (!root) { - return; - } - - if (!root.get(name)) { - root.set(name, new LiveList(initial)); - } - - list.set(root.get(name)); - - unsubscribe(); - unsubscribe = room.subscribe(root.get(name) as LiveList, (newList) => { - list.set(newList); - }); - }); - - onDestroy(unsubscribeRoot); - - return list; -} diff --git a/spaces/huggingface-tools/text-download/README.md b/spaces/huggingface-tools/text-download/README.md deleted file mode 100644 index f7136605c253829e538066839a82a42b0becaa38..0000000000000000000000000000000000000000 --- a/spaces/huggingface-tools/text-download/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Download -emoji: ⚡ -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -tags: -- tool ---- diff --git a/spaces/hysts/LoRA-SD-training/README.md b/spaces/hysts/LoRA-SD-training/README.md deleted file mode 100644 index b2f74f62ef61382573bd770780aaa14870db0b54..0000000000000000000000000000000000000000 --- a/spaces/hysts/LoRA-SD-training/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LoRA + SD Training -emoji: 🏢 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hyxue/HiFiFace-inference-demo/models/gan_loss.py b/spaces/hyxue/HiFiFace-inference-demo/models/gan_loss.py deleted file mode 100644 index 28bf698f69c51bb206ee304f08e5d840eb7c76c7..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/models/gan_loss.py +++ /dev/null @@ -1,45 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class GANLoss(nn.Module): - def __init__(self, target_real_label=1.0, target_fake_label=0.0, tensor=torch.FloatTensor, opt=None): - 
super(GANLoss, self).__init__() - self.real_label = target_real_label - self.fake_label = target_fake_label - self.real_label_tensor = None - self.fake_label_tensor = None - self.zero_tensor = None - self.Tensor = tensor - self.opt = opt - - def get_target_tensor(self, input, target_is_real): - if target_is_real: - return torch.ones_like(input).detach() - else: - return torch.zeros_like(input).detach() - - def get_zero_tensor(self, input): - return torch.zeros_like(input).detach() - - def loss(self, inputs, target_is_real, for_discriminator=True): - target_tensor = self.get_target_tensor(inputs, target_is_real) - loss = F.binary_cross_entropy_with_logits(inputs, target_tensor) - return loss - - def __call__(self, inputs, target_is_real, for_discriminator=True): - # computing loss is a bit complicated because |input| may not be - # a tensor, but list of tensors in case of multiscale discriminator - if isinstance(inputs, list): - loss = 0 - for pred_i in inputs: - if isinstance(pred_i, list): - pred_i = pred_i[-1] - loss_tensor = self.loss(pred_i, target_is_real, for_discriminator) - bs = 1 if len(loss_tensor.size()) == 0 else loss_tensor.size(0) - new_loss = torch.mean(loss_tensor.view(bs, -1), dim=1) - loss += new_loss - return loss / len(inputs) - else: - return self.loss(inputs, target_is_real, for_discriminator) diff --git a/spaces/iamironman4279/SadTalker/src/facerender/modules/generator.py b/spaces/iamironman4279/SadTalker/src/facerender/modules/generator.py deleted file mode 100644 index 5a9edcb3b328d3afc99072b2461d7ca69919f813..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/facerender/modules/generator.py +++ /dev/null @@ -1,255 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from src.facerender.modules.util import ResBlock2d, SameBlock2d, UpBlock2d, DownBlock2d, ResBlock3d, SPADEResnetBlock -from src.facerender.modules.dense_motion import DenseMotionNetwork - - -class OcclusionAwareGenerator(nn.Module): - """ - Generator follows NVIDIA architecture. 
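- It warps a 3D feature volume using a dense motion field predicted from the source and driving keypoints, applies the estimated occlusion map, and decodes the result back to an image.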
- """ - - def __init__(self, image_channel, feature_channel, num_kp, block_expansion, max_features, num_down_blocks, reshape_channel, reshape_depth, - num_resblocks, estimate_occlusion_map=False, dense_motion_params=None, estimate_jacobian=False): - super(OcclusionAwareGenerator, self).__init__() - - if dense_motion_params is not None: - self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, feature_channel=feature_channel, - estimate_occlusion_map=estimate_occlusion_map, - **dense_motion_params) - else: - self.dense_motion_network = None - - self.first = SameBlock2d(image_channel, block_expansion, kernel_size=(7, 7), padding=(3, 3)) - - down_blocks = [] - for i in range(num_down_blocks): - in_features = min(max_features, block_expansion * (2 ** i)) - out_features = min(max_features, block_expansion * (2 ** (i + 1))) - down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.down_blocks = nn.ModuleList(down_blocks) - - self.second = nn.Conv2d(in_channels=out_features, out_channels=max_features, kernel_size=1, stride=1) - - self.reshape_channel = reshape_channel - self.reshape_depth = reshape_depth - - self.resblocks_3d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_3d.add_module('3dr' + str(i), ResBlock3d(reshape_channel, kernel_size=3, padding=1)) - - out_features = block_expansion * (2 ** (num_down_blocks)) - self.third = SameBlock2d(max_features, out_features, kernel_size=(3, 3), padding=(1, 1), lrelu=True) - self.fourth = nn.Conv2d(in_channels=out_features, out_channels=out_features, kernel_size=1, stride=1) - - self.resblocks_2d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_2d.add_module('2dr' + str(i), ResBlock2d(out_features, kernel_size=3, padding=1)) - - up_blocks = [] - for i in range(num_down_blocks): - in_features = max(block_expansion, block_expansion * (2 ** (num_down_blocks - i))) - out_features = max(block_expansion, block_expansion * (2 ** (num_down_blocks - i - 1))) - up_blocks.append(UpBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.up_blocks = nn.ModuleList(up_blocks) - - self.final = nn.Conv2d(block_expansion, image_channel, kernel_size=(7, 7), padding=(3, 3)) - self.estimate_occlusion_map = estimate_occlusion_map - self.image_channel = image_channel - - def deform_input(self, inp, deformation): - _, d_old, h_old, w_old, _ = deformation.shape - _, _, d, h, w = inp.shape - if d_old != d or h_old != h or w_old != w: - deformation = deformation.permute(0, 4, 1, 2, 3) - deformation = F.interpolate(deformation, size=(d, h, w), mode='trilinear') - deformation = deformation.permute(0, 2, 3, 4, 1) - return F.grid_sample(inp, deformation) - - def forward(self, source_image, kp_driving, kp_source): - # Encoding (downsampling) part - out = self.first(source_image) - for i in range(len(self.down_blocks)): - out = self.down_blocks[i](out) - out = self.second(out) - bs, c, h, w = out.shape - # print(out.shape) - feature_3d = out.view(bs, self.reshape_channel, self.reshape_depth, h ,w) - feature_3d = self.resblocks_3d(feature_3d) - - # Transforming feature representation according to deformation and occlusion - output_dict = {} - if self.dense_motion_network is not None: - dense_motion = self.dense_motion_network(feature=feature_3d, kp_driving=kp_driving, - kp_source=kp_source) - output_dict['mask'] = dense_motion['mask'] - - if 'occlusion_map' in dense_motion: - occlusion_map = dense_motion['occlusion_map'] - output_dict['occlusion_map'] = 
occlusion_map - else: - occlusion_map = None - deformation = dense_motion['deformation'] - out = self.deform_input(feature_3d, deformation) - - bs, c, d, h, w = out.shape - out = out.view(bs, c*d, h, w) - out = self.third(out) - out = self.fourth(out) - - if occlusion_map is not None: - if out.shape[2] != occlusion_map.shape[2] or out.shape[3] != occlusion_map.shape[3]: - occlusion_map = F.interpolate(occlusion_map, size=out.shape[2:], mode='bilinear') - out = out * occlusion_map - - # output_dict["deformed"] = self.deform_input(source_image, deformation) # 3d deformation cannot deform 2d image - - # Decoding part - out = self.resblocks_2d(out) - for i in range(len(self.up_blocks)): - out = self.up_blocks[i](out) - out = self.final(out) - out = F.sigmoid(out) - - output_dict["prediction"] = out - - return output_dict - - -class SPADEDecoder(nn.Module): - def __init__(self): - super().__init__() - ic = 256 - oc = 64 - norm_G = 'spadespectralinstance' - label_nc = 256 - - self.fc = nn.Conv2d(ic, 2 * ic, 3, padding=1) - self.G_middle_0 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_1 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_2 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_3 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_4 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_5 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.up_0 = SPADEResnetBlock(2 * ic, ic, norm_G, label_nc) - self.up_1 = SPADEResnetBlock(ic, oc, norm_G, label_nc) - self.conv_img = nn.Conv2d(oc, 3, 3, padding=1) - self.up = nn.Upsample(scale_factor=2) - - def forward(self, feature): - seg = feature - x = self.fc(feature) - x = self.G_middle_0(x, seg) - x = self.G_middle_1(x, seg) - x = self.G_middle_2(x, seg) - x = self.G_middle_3(x, seg) - x = self.G_middle_4(x, seg) - x = self.G_middle_5(x, seg) - x = self.up(x) - x = self.up_0(x, seg) # 256, 128, 128 - x = self.up(x) - x = self.up_1(x, seg) # 64, 256, 256 - - x = self.conv_img(F.leaky_relu(x, 2e-1)) - # x = torch.tanh(x) - x = F.sigmoid(x) - - return x - - -class OcclusionAwareSPADEGenerator(nn.Module): - - def __init__(self, image_channel, feature_channel, num_kp, block_expansion, max_features, num_down_blocks, reshape_channel, reshape_depth, - num_resblocks, estimate_occlusion_map=False, dense_motion_params=None, estimate_jacobian=False): - super(OcclusionAwareSPADEGenerator, self).__init__() - - if dense_motion_params is not None: - self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, feature_channel=feature_channel, - estimate_occlusion_map=estimate_occlusion_map, - **dense_motion_params) - else: - self.dense_motion_network = None - - self.first = SameBlock2d(image_channel, block_expansion, kernel_size=(3, 3), padding=(1, 1)) - - down_blocks = [] - for i in range(num_down_blocks): - in_features = min(max_features, block_expansion * (2 ** i)) - out_features = min(max_features, block_expansion * (2 ** (i + 1))) - down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.down_blocks = nn.ModuleList(down_blocks) - - self.second = nn.Conv2d(in_channels=out_features, out_channels=max_features, kernel_size=1, stride=1) - - self.reshape_channel = reshape_channel - self.reshape_depth = reshape_depth - - self.resblocks_3d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_3d.add_module('3dr' + str(i), ResBlock3d(reshape_channel, kernel_size=3, padding=1)) - - out_features = 
block_expansion * (2 ** (num_down_blocks)) - self.third = SameBlock2d(max_features, out_features, kernel_size=(3, 3), padding=(1, 1), lrelu=True) - self.fourth = nn.Conv2d(in_channels=out_features, out_channels=out_features, kernel_size=1, stride=1) - - self.estimate_occlusion_map = estimate_occlusion_map - self.image_channel = image_channel - - self.decoder = SPADEDecoder() - - def deform_input(self, inp, deformation): - _, d_old, h_old, w_old, _ = deformation.shape - _, _, d, h, w = inp.shape - if d_old != d or h_old != h or w_old != w: - deformation = deformation.permute(0, 4, 1, 2, 3) - deformation = F.interpolate(deformation, size=(d, h, w), mode='trilinear') - deformation = deformation.permute(0, 2, 3, 4, 1) - return F.grid_sample(inp, deformation) - - def forward(self, source_image, kp_driving, kp_source): - # Encoding (downsampling) part - out = self.first(source_image) - for i in range(len(self.down_blocks)): - out = self.down_blocks[i](out) - out = self.second(out) - bs, c, h, w = out.shape - # print(out.shape) - feature_3d = out.view(bs, self.reshape_channel, self.reshape_depth, h ,w) - feature_3d = self.resblocks_3d(feature_3d) - - # Transforming feature representation according to deformation and occlusion - output_dict = {} - if self.dense_motion_network is not None: - dense_motion = self.dense_motion_network(feature=feature_3d, kp_driving=kp_driving, - kp_source=kp_source) - output_dict['mask'] = dense_motion['mask'] - - # import pdb; pdb.set_trace() - - if 'occlusion_map' in dense_motion: - occlusion_map = dense_motion['occlusion_map'] - output_dict['occlusion_map'] = occlusion_map - else: - occlusion_map = None - deformation = dense_motion['deformation'] - out = self.deform_input(feature_3d, deformation) - - bs, c, d, h, w = out.shape - out = out.view(bs, c*d, h, w) - out = self.third(out) - out = self.fourth(out) - - # occlusion_map = torch.where(occlusion_map < 0.95, 0, occlusion_map) - - if occlusion_map is not None: - if out.shape[2] != occlusion_map.shape[2] or out.shape[3] != occlusion_map.shape[3]: - occlusion_map = F.interpolate(occlusion_map, size=out.shape[2:], mode='bilinear') - out = out * occlusion_map - - # Decoding part - out = self.decoder(out) - - output_dict["prediction"] = out - - return output_dict \ No newline at end of file diff --git a/spaces/iamstolas/STOLAS/src/components/tailwind-indicator.tsx b/spaces/iamstolas/STOLAS/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/iamstolas/STOLAS/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
-
xs
-
sm
-
md
-
lg
-
xl
-
2xl
-
- ) -} diff --git a/spaces/innovatorved/whisper.api/app/utils/exception.py b/spaces/innovatorved/whisper.api/app/utils/exception.py deleted file mode 100644 index e809fb7e8253c23b7044759a03872988727392b2..0000000000000000000000000000000000000000 --- a/spaces/innovatorved/whisper.api/app/utils/exception.py +++ /dev/null @@ -1,20 +0,0 @@ -import logging - -logger = logging.getLogger(__name__) -from fastapi import HTTPException, status - - -def handle_exceptions(func): - async def wrapper(*args, **kwargs): - try: - return await func(*args, **kwargs) - except HTTPException as exc: - logger.error(exc) - raise exc - except Exception as exc: - logger.error(exc) - raise HTTPException( - status_code=status.HTTP_400_BAD_REQUEST, detail=str(exc) - ) from exc - - return wrapper diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Axifer Billiard Download [UPDATED].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Axifer Billiard Download [UPDATED].md deleted file mode 100644 index d91b493ba555003261fbe05ecfcb2cd67058aa10..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Axifer Billiard Download [UPDATED].md +++ /dev/null @@ -1,14 +0,0 @@ -

axifer billiard download


Download Zip ⚙⚙⚙ https://urlin.us/2uExie



- -References - -Category:Gymnastics - -Category:Gymnastics in ArgentinaThe catheter used for internal infusion of anesthetic in patients during surgery, or other medical procedures, has certain characteristics of simplicity and reliability. In addition, the catheter must be very small in diameter, be flexible, and be suitable for insertion into very small blood vessels. Further, the catheter must withstand the high pressures which are used, so that neither the catheter nor its contents will escape from the blood vessel during use. Furthermore, the catheter must have a minimum of frictional resistance and occlusion during insertion, so that it will be inserted into the blood vessel with as little pressure as possible on the patient's skin. It must be able to withstand the effects of the internal pressures which are used during infusion, while at the same time it must be able to tolerate and even encourage the ingrowth of body fluids, thereby assisting in the prevention of the formation of blood clots on its interior surface. - -Heretofore, catheters having some of the foregoing features have been used for the infusion of anesthetic agents during surgery. Catheters of this type have been characterized by the inclusion of an inflatable reservoir and an independent fluid conduit. The reservoir is inflated by a separate, direct connection to an external source of high pressure fluid. The inflatable reservoir is made of a resilient, flexible material, so that it can be easily inserted into the blood vessel. The reservoir is sized and shaped so that it will collapse on itself and conform to the interior shape of the blood vessel, so that it will not be restricted in its flow of fluid out of the vessel during use. As the fluid from the reservoir is exhausted, the reservoir is refilled from the source of high pressure fluid. The inflatable reservoir is generally connected to a thin-walled tube which is adapted to serve as a conduit for the independent flow of anesthetic agent to the blood vessel. - -The independent flow of fluid through the tube is provided by a needle-free connection, usually by a "T"-fitting which is connected to the conduit at its downstream end, and to a small diameter tube which is connected to the tube at its upstream end. The T-fitting is connected to the conduit at its downstream end by a filter, which prevents particulate matter and large size fluid droplets from entering the conduit and causing an obstruction to fluid flow. The upstream end of the tube is positioned to be inserted through a small opening in the patient 4fefd39f24
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Data Structures Book By Revathi Poonguzhali Free PATCHED Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Data Structures Book By Revathi Poonguzhali Free PATCHED Download.md deleted file mode 100644 index 573600c3f2377d003b8767bbef8838630eff1213..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Data Structures Book By Revathi Poonguzhali Free PATCHED Download.md +++ /dev/null @@ -1,24 +0,0 @@ - -

Data Structures: A Comprehensive Guide by Revathi Poonguzhali

-

If you are looking for a book that covers all the essential topics of data structures in a clear and concise way, then Data Structures: A Comprehensive Guide by Revathi Poonguzhali is the book for you. This book is written by P. Revathi and S. Poonguzhali, experienced teachers and authors in the field of computer science and engineering. The book is designed for students of information technology, computer science and engineering, and other related disciplines.

-

Data Structures: A Comprehensive Guide by Revathi Poonguzhali covers the basics of data structures such as arrays, stacks, queues, linked lists, trees, graphs, hashing, and sorting. It also introduces advanced topics such as dynamic programming, greedy algorithms, backtracking, divide and conquer, and algorithm analysis. The book provides numerous examples, diagrams, exercises, and solutions to help the readers understand and apply the concepts. The book also follows the syllabus of Anna University and other universities in India.
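To give a flavor of that material, here is the kind of minimal example such a chapter typically builds up to. This is an illustrative Python sketch of a stack (a LIFO structure), not an excerpt from the book:

```python
class Stack:
    """A minimal LIFO stack backed by a Python list."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)   # O(1) amortized

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()   # O(1)


s = Stack()
s.push("a")
s.push("b")
assert s.pop() == "b"  # last in, first out
```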

-

Data Structures Book By Revathi Poonguzhali Free Download


Download Zip ::: https://urlin.us/2uExqt



-

Data Structures: A Comprehensive Guide by Revathi Poonguzhali is available for free download from various online sources[^1^]. You can also buy the paperback version from Amazon[^2^]. If you want to learn data structures in a simple and effective way, then this book is a must-read for you.

In this article, we will review some of the key features and benefits of Data Structures: A Comprehensive Guide by Revathi Poonguzhali. We will also provide some feedback from the readers who have used this book for learning data structures.

-

Key Features and Benefits

-

Data Structures: A Comprehensive Guide by Revathi Poonguzhali has the following features and benefits:

-
    -
  • It covers all the fundamental and advanced topics of data structures in a systematic and logical way.
  • It uses simple and easy-to-understand language and terminology.
  • It provides clear and detailed explanations of the concepts and algorithms with the help of examples and diagrams.
  • It includes a variety of exercises and solutions at the end of each chapter to test the understanding and application of the topics.
  • It follows the latest syllabus and guidelines of Anna University and other universities in India.
  • It is suitable for both beginners and advanced learners of data structures.
-

Feedback from Readers

-

Data Structures: A Comprehensive Guide by Revathi Poonguzhali has received positive feedback from the readers who have used this book for learning data structures. Here are some of the comments from the readers:

-

-
"This book is very helpful for learning data structures. It covers all the topics in a simple and clear way. The examples and diagrams are very useful for understanding the concepts. The exercises and solutions are also very helpful for practicing the problems. I recommend this book to anyone who wants to learn data structures."
-
"This book is one of the best books on data structures. It is very well-written and organized. It explains the topics in a logical and coherent way. The examples and diagrams are very illustrative and informative. The exercises and solutions are also very comprehensive and challenging. I learned a lot from this book."
-
"This book is a must-read for anyone who wants to learn data structures. It covers all the essential and advanced topics in a concise and effective way. The examples and diagrams are very relevant and practical. The exercises and solutions are also very useful for revising the topics. This book helped me a lot in my studies."

-
-
\ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Delicious Retouch Panel 4.1.3.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Delicious Retouch Panel 4.1.3.md deleted file mode 100644 index cdb91e19bf26c64f8a90930d18f03ef4e58b7b54..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Delicious Retouch Panel 4.1.3.md +++ /dev/null @@ -1,6 +0,0 @@ -

Delicious Retouch Panel 4.1.3


Download File ››› https://urlin.us/2uEwZj



-
-Awesome Skin Smoothing and Skin Retouching Te... Added: 8 months ago. Added by: Creative Instructor · Delicious Retouch ...
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Eltima.Serial.Port.Monitor.4.0.2.281.keygen SND.zip.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Eltima.Serial.Port.Monitor.4.0.2.281.keygen SND.zip.md deleted file mode 100644 index d0fd4b275f21d97b0fb4e3178fe4e208592df8d1..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Eltima.Serial.Port.Monitor.4.0.2.281.keygen SND.zip.md +++ /dev/null @@ -1,6 +0,0 @@ -

Eltima.Serial.Port.Monitor.4.0.2.281.keygen SND.zip


Download File > https://urlin.us/2uEwU2



-
-Driver Booster PRO 5.1.0 With Serial Key + Crack Is Here . ... Eltima.Serial.Port.Monitor.4.0.2.281.keygen SND.zip · singam 2 full movie blu-ray ...
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (movie Magic Screenwriter Mac Crack T) TOP.md b/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (movie Magic Screenwriter Mac Crack T) TOP.md deleted file mode 100644 index ef80a397036f57e530fe63d88e526fb743f62977..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/HD Online Player (movie Magic Screenwriter Mac Crack T) TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

HD Online Player (movie magic screenwriter mac crack t)


Download Ziphttps://urlin.us/2uEwuF



-
-After installing the movie magic scheduling 5 mac serial number, select activate. ... Movie Magic Scheduling 6 Crack Full Patch Free Download [Incl-Codes]. ... I'm trying to paste my Activation ID into Movie Magic and I can't find the ... Execute the following command in the terminal to set the Full HD resolution:
-
-
-

diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Linqer 4 Activation Key.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Linqer 4 Activation Key.md deleted file mode 100644 index 6cad02bdca607475067f0aed14802f35aaca57ba..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Linqer 4 Activation Key.md +++ /dev/null @@ -1,6 +0,0 @@ -

Linqer 4 Activation Key


Downloadhttps://urlin.us/2uEyFD



-
-744019 Serial Key Keygen. 9/01/2019 ... CRACK.. Veritas System Recovery 18.0.0.56426 Crack Serial Key. ... Linqer 4 Activation Key - · Crack.
-
-
-

diff --git a/spaces/inreVtussa/clothingai/Examples/Agilent VEE Pro 93 Crackrar.md b/spaces/inreVtussa/clothingai/Examples/Agilent VEE Pro 93 Crackrar.md deleted file mode 100644 index 00597e88833bb4a1af05120bd039c786c0015784..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Agilent VEE Pro 93 Crackrar.md +++ /dev/null @@ -1,126 +0,0 @@ - -

Agilent VEE Pro 93 Crackrar: A Visual Engineering Environment for Measurement and Analysis

- -

If you are looking for graphical programming software that can help you with your measurement and analysis tasks, you might want to check out Agilent VEE Pro 93 Crackrar. This software is a visual engineering environment that allows you to create and execute programs without writing code. You can use Agilent VEE Pro 93 Crackrar to interface with various instruments, such as oscilloscopes, multimeters, signal generators, and more. You can also use it to perform data acquisition, processing, visualization, and reporting.

-

Agilent VEE Pro 93 Crackrar


DOWNLOAD » https://tiurll.com/2uCjCC



- -

What are the features of Agilent VEE Pro 93 Crackrar?

- -

Agilent VEE Pro 93 Crackrar has many features that make it a powerful and user-friendly software for measurement and analysis. Some of these features are:

- -
    -
  • It supports multiple platforms, such as Windows, Linux, and Mac OS X.
  • It has a drag-and-drop interface that lets you create programs by connecting objects on a canvas.
  • It has a library of built-in functions and objects that cover various domains, such as math, statistics, logic, communication, and more.
  • It has a dataflow execution model that ensures the correct order of operations and data transfer.
  • It has a debugging tool that lets you monitor and modify the values of variables and objects during runtime.
  • It has a data display tool that lets you view and manipulate data in various formats, such as graphs, tables, meters, gauges, and more.
  • It has a report generation tool that lets you create and export reports in various formats, such as HTML, PDF, Excel, Word, and more.
  • It has an instrument manager tool that lets you configure and control instruments from various vendors and protocols.
  • It has an ActiveX/COM integration tool that lets you use ActiveX/COM components in your programs.
  • It has a MATLAB integration tool that lets you use MATLAB functions and scripts in your programs.
- -

How to download and install Agilent VEE Pro 93 Crackrar?

- -

If you want to download and install Agilent VEE Pro 93 Crackrar for free, you can follow these steps:

- -
    -
  1. Go to this link: https://desotzplanbuli.files.wordpress.com/2020/03/agilent-vee-pro-93-crackrar.pdf
  2. Download the file Agilent VEE Pro 9.3 Crack.rar
  3. Extract the file using WinRAR or any other software that can handle .rar files
  4. Run the setup.exe file and follow the instructions
  5. Copy the crack file from the crack folder and paste it into the installation folder
  6. Enjoy using Agilent VEE Pro 93 Crackrar for free
- -

Conclusion

- -

Agilent VEE Pro 93 Crackrar is a visual engineering environment that can help you with your measurement and analysis tasks. It has many features that make it a powerful and user-friendly software. You can download and install Agilent VEE Pro 93 Crackrar for free by following the steps above. However, if you want to support the developers and get the latest updates and support, you should buy the original software from the official website: https://www.keysight.com/en/pd-1000003186%3Aepsg%3Apro/vee-pro?cc=US&lc=eng

-

How to use Agilent VEE Pro 93 Crackrar for your projects?

- -

Using Agilent VEE Pro 93 Crackrar for your projects is easy and fun. You can create programs by dragging and dropping objects on a canvas and connecting them with wires. You can also customize the properties and behavior of each object by double-clicking on it. You can run your programs by clicking on the Run button or pressing F5. You can also save your programs as .vee files or export them as executable files.

-

- -

Here are some examples of what you can do with Agilent VEE Pro 93 Crackrar:

- -
    -
  • You can create a program that measures the voltage and current of a circuit using a digital multimeter and displays the results on a graph.
  • You can create a program that generates a sine wave signal using a signal generator and captures the waveform using an oscilloscope.
  • You can create a program that performs a frequency analysis on a sound file using a spectrum analyzer and plots the spectrum on a chart.
  • You can create a program that controls a robotic arm using a serial port and sends commands based on user input.
  • You can create a program that reads data from a temperature sensor using a GPIB interface and logs the data to a file (a scripted version of this idea is sketched just after this list).
- -
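To make the last example concrete: the same temperature-logging idea can also be scripted outside of VEE's block diagrams. Below is a minimal Python sketch using the PyVISA library (PyVISA is not part of VEE, and the GPIB resource string and SCPI command are hypothetical placeholders that vary by instrument):

```python
import csv
import time

import pyvisa  # assumes PyVISA and a VISA backend are installed

rm = pyvisa.ResourceManager()
# Hypothetical resource string; replace with your instrument's GPIB address.
dmm = rm.open_resource("GPIB0::22::INSTR")

with open("temperature_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "reading"])
    for _ in range(10):  # ten samples, one per second
        # The SCPI command is instrument-specific; MEAS:TEMP? is only illustrative.
        reading = float(dmm.query("MEAS:TEMP?"))
        writer.writerow([time.time(), reading])
        time.sleep(1)
```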

What are the benefits of Agilent VEE Pro 93 Crackrar?

- -

Agilent VEE Pro 93 Crackrar has many benefits that make it a great software for measurement and analysis. Some of these benefits are:

- -
    -
  • It is easy to learn and use, even for beginners and non-programmers.
  • It is flexible and versatile, as it can work with various instruments, data types, and formats.
  • It is fast and efficient, as it can execute programs in parallel and optimize performance.
  • It is reliable and accurate, as it can handle errors and exceptions gracefully.
  • It is compatible and interoperable, as it can integrate with other software and components.
- -

Where to get more information about Agilent VEE Pro 93 Crackrar?

- -

If you want to get more information about Agilent VEE Pro 93 Crackrar, you can visit the following sources:

- -
    -
  • The official website of Keysight Technologies, the developer of Agilent VEE Pro 93 Crackrar: https://www.keysight.com/en/pd-1000003186%3Aepsg%3Apro/vee-pro?cc=US&lc=eng
  • The user manual of Agilent VEE Pro 93 Crackrar: https://literature.cdn.keysight.com/litweb/pdf/5990-5326EN.pdf
  • The online help file of Agilent VEE Pro 93 Crackrar: https://literature.cdn.keysight.com/litweb/pdf/5990-5327EN.pdf
  • The online forum of Agilent VEE Pro 93 Crackrar: https://community.keysight.com/community/keysight-blogs/eesof-eda/tags/vee
- -

Conclusion

- -

In conclusion, Agilent VEE Pro 93 Crackrar is a visual engineering environment that can help you with your measurement and analysis tasks. It has many features, benefits, and resources that make it a powerful and user-friendly software. You can download and install Agilent VEE Pro 93 Crackrar for free by following the steps in the previous section. However, if you want to support the developers and get the latest updates and support, you should buy the original software from the official website: https://www.keysight.com/en/pd-1000003186%3Aepsg%3Apro/vee-pro?cc=US&lc=eng

-

How to learn Agilent VEE Pro 93 Crackrar?

- -

If you want to learn Agilent VEE Pro 93 Crackrar, you can use various resources and methods that can help you master the software. Some of these resources and methods are:

- -
    -
  • The tutorial of Agilent VEE Pro 93 Crackrar: https://literature.cdn.keysight.com/litweb/pdf/5990-5328EN.pdf. This tutorial provides a step-by-step guide on how to use the basic features and functions of Agilent VEE Pro 93 Crackrar. You can follow the tutorial and practice with the sample programs and exercises.
  • The examples of Agilent VEE Pro 93 Crackrar: https://www.keysight.com/en/pd-1000003186%3Aepsg%3Apro/vee-pro?cc=US&lc=eng#examples. These examples show you how to use Agilent VEE Pro 93 Crackrar for various applications and scenarios. You can download and run the examples and study how they work.
  • The courses of Agilent VEE Pro 93 Crackrar: https://www.keysight.com/en/pc-1000003186%3Aepsg%3Apro/vee-pro?cc=US&lc=eng#courses. These courses are designed to teach you how to use Agilent VEE Pro 93 Crackrar for advanced topics and techniques. You can enroll in these courses and learn from the instructors and experts.
  • The books of Agilent VEE Pro 93 Crackrar: https://www.amazon.com/s?k=agilent+vee+pro&i=stripbooks-intl-ship&ref=nb_sb_noss. These books are written by authors who have experience and knowledge in using Agilent VEE Pro 93 Crackrar. You can read these books and learn from their insights and tips.
  • The online communities of Agilent VEE Pro 93 Crackrar: https://community.keysight.com/community/keysight-blogs/eesof-eda/tags/vee. These online communities are platforms where you can interact with other users and enthusiasts of Agilent VEE Pro 93 Crackrar. You can ask questions, share ideas, get feedback, and find solutions.
- -

How to compare Agilent VEE Pro 93 Crackrar with other software?

- -

If you want to compare Agilent VEE Pro 93 Crackrar with other software that can perform similar tasks, you can use various criteria and factors that can help you evaluate them. Some of these criteria and factors are:

- -
    -
  • The price: How much does the software cost? Is it affordable or expensive? Does it offer a free trial or a discount?
  • The features: What are the features and functions of the software? Are they sufficient or lacking? Are they easy or difficult to use?
  • The performance: How fast and efficient is the software? Does it run smoothly or crash frequently? Does it consume a lot of resources or memory?
  • The compatibility: How compatible is the software with different platforms, instruments, data types, and formats? Does it support or integrate with other software and components?
  • The support: How good is the support and service of the software? Does it provide technical assistance, product information, software updates, and more? Does it have a user manual, an online help file, a forum, or a community?
- -

Some examples of other software that can perform similar tasks as Agilent VEE Pro 93 Crackrar are:

- -
    -
  • LabVIEW: LabVIEW is a graphical programming environment that can help you create programs for measurement, automation, and analysis. It is developed by National Instruments. You can find more information about LabVIEW here: https://www.ni.com/en-us/shop/labview.html
  • Matlab: Matlab is a numerical computing environment that can help you perform mathematical operations, data analysis, visualization, and programming. It is developed by MathWorks. You can find more information about Matlab here: https://www.mathworks.com/products/matlab.html
  • Python: Python is a general-purpose programming language that can help you perform various tasks, such as data science, web development, machine learning, and more. It is an open-source project. You can find more information about Python here: https://www.python.org/
- -


-

Conclusion

- -

In this article, you have learned about Agilent VEE Pro 93 Crackrar, a visual engineering environment that can help you with your measurement and analysis tasks. You have learned how to download and install Agilent VEE Pro 93 Crackrar for free, how to use it for various projects, what are the benefits of using it, where to get more information and support for it, how to troubleshoot it, how to learn it, and how to compare it with other software. You have also seen some examples of programs and applications that you can create with Agilent VEE Pro 93 Crackrar.

- -

Agilent VEE Pro 93 Crackrar is a powerful and user-friendly software that can make your work easier and faster. It can work with various instruments, data types, and formats, and it can integrate with other software and components. It can also provide you with technical assistance, product information, software updates, and more.

- -

If you want to try Agilent VEE Pro 93 Crackrar for yourself, you can download and install it for free by following the steps in the previous sections. However, if you want to support the developers and get the latest updates and support, you should buy the original software from the official website: https://www.keysight.com/en/pd-1000003186%3Aepsg%3Apro/vee-pro?cc=US&lc=eng

- -

Thank you for reading this article. We hope you have found it useful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you.

-
-
\ No newline at end of file diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/metrics/kernel_inception_distance.py b/spaces/james-oldfield/PandA/networks/stylegan3/metrics/kernel_inception_distance.py deleted file mode 100644 index d69325c1ef4e2894817ef6003e9335c4de657199..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/metrics/kernel_inception_distance.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Kernel Inception Distance (KID) from the paper "Demystifying MMD -GANs". Matches the original implementation by Binkowski et al. at -https://github.com/mbinkowski/MMD-GAN/blob/master/gan/compute_scores.py""" - -import numpy as np -from . import metric_utils - -#---------------------------------------------------------------------------- - -def compute_kid(opts, max_real, num_gen, num_subsets, max_subset_size): - # Direct TorchScript translation of http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz - detector_url = 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/metrics/inception-2015-12-05.pkl' - detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer. - - real_features = metric_utils.compute_feature_stats_for_dataset( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=0, capture_all=True, max_items=max_real).get_all() - - gen_features = metric_utils.compute_feature_stats_for_generator( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=1, capture_all=True, max_items=num_gen).get_all() - - if opts.rank != 0: - return float('nan') - - n = real_features.shape[1] - m = min(min(real_features.shape[0], gen_features.shape[0]), max_subset_size) - t = 0 - for _subset_idx in range(num_subsets): - x = gen_features[np.random.choice(gen_features.shape[0], m, replace=False)] - y = real_features[np.random.choice(real_features.shape[0], m, replace=False)] - a = (x @ x.T / n + 1) ** 3 + (y @ y.T / n + 1) ** 3 - b = (x @ y.T / n + 1) ** 3 - t += (a.sum() - np.diag(a).sum()) / (m - 1) - b.sum() * 2 / m - kid = t / num_subsets / m - return float(kid) - -#---------------------------------------------------------------------------- diff --git a/spaces/jbetker/tortoise/utils/typical_sampling.py b/spaces/jbetker/tortoise/utils/typical_sampling.py deleted file mode 100644 index ff6bf487947e88a55fa45f2ffec1b9540df1d4fd..0000000000000000000000000000000000000000 --- a/spaces/jbetker/tortoise/utils/typical_sampling.py +++ /dev/null @@ -1,33 +0,0 @@ -import torch -from transformers import LogitsWarper - - -class TypicalLogitsWarper(LogitsWarper): - def __init__(self, mass: float = 0.9, filter_value: float = -float("Inf"), min_tokens_to_keep: int = 1): - self.filter_value = filter_value - self.mass = mass - self.min_tokens_to_keep = min_tokens_to_keep - - def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: - # calculate entropy - normalized = torch.nn.functional.log_softmax(scores, dim=-1) - p = torch.exp(normalized) - 
ent = -(normalized * p).nansum(-1, keepdim=True) - - # shift and sort - shifted_scores = torch.abs((-normalized) - ent) - sorted_scores, sorted_indices = torch.sort(shifted_scores, descending=False) - sorted_logits = scores.gather(-1, sorted_indices) - cumulative_probs = sorted_logits.softmax(dim=-1).cumsum(dim=-1) - - # Remove tokens with cumulative mass above the threshold - last_ind = (cumulative_probs < self.mass).sum(dim=1) - last_ind[last_ind < 0] = 0 - sorted_indices_to_remove = sorted_scores > sorted_scores.gather(1, last_ind.view(-1, 1)) - if self.min_tokens_to_keep > 1: - # Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below) - sorted_indices_to_remove[..., : self.min_tokens_to_keep] = 0 - indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove) - - scores = scores.masked_fill(indices_to_remove, self.filter_value) - return scores \ No newline at end of file diff --git a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/accordion.tsx b/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/accordion.tsx deleted file mode 100644 index 937620af27e5d8ef577f0baca229a9b753ebd017..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/VideoChain-UI/src/components/ui/accordion.tsx +++ /dev/null @@ -1,60 +0,0 @@ -"use client" - -import * as React from "react" -import * as AccordionPrimitive from "@radix-ui/react-accordion" -import { ChevronDown } from "lucide-react" - -import { cn } from "@/lib/utils" - -const Accordion = AccordionPrimitive.Root - -const AccordionItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -AccordionItem.displayName = "AccordionItem" - -const AccordionTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - svg]:rotate-180", - className - )} - {...props} - > - {children} - - - -)) -AccordionTrigger.displayName = AccordionPrimitive.Trigger.displayName - -const AccordionContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - -
{children}
-
-)) -AccordionContent.displayName = AccordionPrimitive.Content.displayName - -export { Accordion, AccordionItem, AccordionTrigger, AccordionContent } diff --git a/spaces/jbilcke-hf/hotshot-xl-api/download-model.sh b/spaces/jbilcke-hf/hotshot-xl-api/download-model.sh deleted file mode 100644 index 9d91d45829e69d087254767b674ed973115e3433..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/hotshot-xl-api/download-model.sh +++ /dev/null @@ -1,48 +0,0 @@ -# we do this at runtime since the model is too large -modelsDir="/data/models" -sdxlModelName="stable-diffusion-xl-base-1.0" - -echo "sanity check: listing for files and folders in the current working dir" -ls -la - -echo "sanity check: listing for files and folders in the Hotshot-XL dir" -cd Hotshot-XL -ls -la -cd .. - -mkdir -p $modelsDir -cd $modelsDir - -if [ ! -d "$modelsDir/$sdxlModelName" ] -then - # make sure we have LFS capability - # git lfs install - - # clone the repo (note: it is huge) - # git clone https://huggingface.co/stabilityai/$modelName - - # edit: actually let's not download the huge git repo of SDXL - # but only the unet - mkdir -p $sdxlModelName - cd $sdxlModelName - wget https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/unet/diffusion_pytorch_model.safetensors - cd .. -else - echo "$modelsDir/$sdxlModelName already exists" -fi - -# we don't need to downlaod this as this is already in the Dockerfile -# if [ ! -d "$modelsDir/hotshotco" ] -# then -# # make sure we have LFS capability -# # git lfs install -# -# mkdir -p hotshotco -# cd hotshotco -# git clone https://huggingface.co/hotshotco/Hotshot-XL -# cd .. -# else -# echo "$modelsDir/$hotshotco already exists" -# fi - -cd .. diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/transformer/transformer.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/transformer/transformer.py deleted file mode 100644 index 76d1003b3852ce72c6ad5c3c23705f380197362f..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/modeling/transformer/transformer.py +++ /dev/null @@ -1,380 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/transformer.py -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -""" -Transformer class. 
- -Copy-paste from torch.nn.Transformer with modifications: - * positional encodings are passed in MHattention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers -""" -import copy -from typing import List, Optional - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - - -class Transformer(nn.Module): - def __init__( - self, - d_model=512, - nhead=8, - num_encoder_layers=6, - num_decoder_layers=6, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - return_intermediate_dec=False, - ): - super().__init__() - - encoder_layer = TransformerEncoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - self.encoder = TransformerEncoder( - encoder_layer, num_encoder_layers, encoder_norm - ) - - decoder_layer = TransformerDecoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - decoder_norm = nn.LayerNorm(d_model) - self.decoder = TransformerDecoder( - decoder_layer, - num_decoder_layers, - decoder_norm, - return_intermediate=return_intermediate_dec, - ) - - self._reset_parameters() - - self.d_model = d_model - self.nhead = nhead - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, src, mask, query_embed, pos_embed): - # flatten NxCxHxW to HWxNxC - bs, c, h, w = src.shape - src = src.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat(1, bs, 1) - if mask is not None: - mask = mask.flatten(1) - - tgt = torch.zeros_like(query_embed) - memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed) - hs = self.decoder( - tgt, - memory, - memory_key_padding_mask=mask, - pos=pos_embed, - query_pos=query_embed, - ) - return hs.transpose(1, 2), memory.permute(1, 2, 0).view(bs, c, h, w) - - -class TransformerEncoder(nn.Module): - def __init__(self, encoder_layer, num_layers, norm=None): - super().__init__() - self.layers = _get_clones(encoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - - def forward( - self, - src, - mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - output = src - - for layer in self.layers: - output = layer( - output, - src_mask=mask, - src_key_padding_mask=src_key_padding_mask, - pos=pos, - ) - - if self.norm is not None: - output = self.norm(output) - - return output - - -class TransformerDecoder(nn.Module): - def __init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False): - super().__init__() - self.layers = _get_clones(decoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - self.return_intermediate = return_intermediate - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - output = tgt - - intermediate = [] - - for layer in self.layers: - output = layer( - output, - memory, - tgt_mask=tgt_mask, - memory_mask=memory_mask, - tgt_key_padding_mask=tgt_key_padding_mask, - memory_key_padding_mask=memory_key_padding_mask, - pos=pos, - query_pos=query_pos, - ) - if self.return_intermediate: - 
intermediate.append(self.norm(output)) - - if self.norm is not None: - output = self.norm(output) - if self.return_intermediate: - intermediate.pop() - intermediate.append(output) - - if self.return_intermediate: - return torch.stack(intermediate) - - return output.unsqueeze(0) - - -class TransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - q = k = self.with_pos_embed(src, pos) - src2 = self.self_attn( - q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask - )[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src - - def forward_pre( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - src2 = self.norm1(src) - q = k = self.with_pos_embed(src2, pos) - src2 = self.self_attn( - q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask - )[0] - src = src + self.dropout1(src2) - src2 = self.norm2(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src2)))) - src = src + self.dropout2(src2) - return src - - def forward( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - if self.normalize_before: - return self.forward_pre(src, src_mask, src_key_padding_mask, pos) - return self.forward_post(src, src_mask, src_key_padding_mask, pos) - - -class TransformerDecoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.norm3 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - self.dropout3 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - 
tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - q = k = self.with_pos_embed(tgt, query_pos) - tgt2 = self.self_attn( - q, k, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask - )[0] - tgt = tgt + self.dropout1(tgt2) - tgt = self.norm1(tgt) - tgt2 = self.multihead_attn( - query=self.with_pos_embed(tgt, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, - attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask, - )[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout3(tgt2) - tgt = self.norm3(tgt) - return tgt - - def forward_pre( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - tgt2 = self.norm1(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn( - q, k, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask - )[0] - tgt = tgt + self.dropout1(tgt2) - tgt2 = self.norm2(tgt) - tgt2 = self.multihead_attn( - query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, - attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask, - )[0] - tgt = tgt + self.dropout2(tgt2) - tgt2 = self.norm3(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout3(tgt2) - return tgt - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - if self.normalize_before: - return self.forward_pre( - tgt, - memory, - tgt_mask, - memory_mask, - tgt_key_padding_mask, - memory_key_padding_mask, - pos, - query_pos, - ) - return self.forward_post( - tgt, - memory, - tgt_mask, - memory_mask, - tgt_key_padding_mask, - memory_key_padding_mask, - pos, - query_pos, - ) - - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise RuntimeError(f"activation should be relu/gelu, not {activation}.") diff --git a/spaces/jganzabalseenka/NER-spanish/README.md b/spaces/jganzabalseenka/NER-spanish/README.md deleted file mode 100644 index 05ccf4defc7eba95a30daf49d31ac495d0f1c0b1..0000000000000000000000000000000000000000 --- a/spaces/jganzabalseenka/NER-spanish/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: NER Spanish -emoji: 🐢 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jinmao/2/app.py b/spaces/jinmao/2/app.py deleted file mode 100644 index df0e732a45cfa4fb44fc8cff5779a89c440b4481..0000000000000000000000000000000000000000 --- a/spaces/jinmao/2/app.py +++ /dev/null @@ -1,447 +0,0 @@ -# -*- coding:utf-8 
-*- -import os -import logging -import sys - -import gradio as gr - -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.chat_func import * -from modules.openai_func import get_usage - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -my_api_key = "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" # 在这里输入你的 API 密钥 - -# if we are running in Docker -if os.environ.get("dockerrun") == "yes": - dockerflag = True -else: - dockerflag = False - -authflag = False - -if not my_api_key: - my_api_key = os.environ.get("my_api_key") -if dockerflag: - if my_api_key == "empty": - logging.error("Please give a api key!") - sys.exit(1) - # auth - username = os.environ.get("USERNAME") - password = os.environ.get("PASSWORD") - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if ( - not my_api_key - and os.path.exists("api_key.txt") - and os.path.getsize("api_key.txt") - ): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - user_question = gr.State("") - outputing = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(scale=1): - gr.HTML(title) - with gr.Column(scale=4): - gr.HTML('
欢迎体验ChatGPT,我用川虎大神的搭了一个,默认我自己的apikey,兄弟们悠着点用
') - with gr.Column(scale=4): - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - with gr.Row(scale=1).style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(scale=1): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(scale=1): - with gr.Column(scale=12): - user_input = gr.Textbox( - show_label=False, placeholder="在这里输入", interactive=True - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - cancelBtn = gr.Button("取消", variant="secondary", visible=False) - with gr.Row(scale=1): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delFirstBtn = gr.Button("🗑️ 删除最旧对话") - delLastBtn = gr.Button("🗑️ 删除最新对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - usageTxt = gr.Markdown(get_usage(my_api_key), elem_id="usage_display") - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - language_select_dropdown = gr.Dropdown( - label="选择回复语言(针对搜索&索引功能)", - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - value=load_template( - get_template_names(plain=True)[0], mode=1 - )[0], - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - default_btn = gr.Button("🔙 恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - 
minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - with gr.Accordion("网络设置", open=False): - apiurlTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API地址...", - label="API地址", - value="https://api.openai.com/v1/chat/completions", - lines=2, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - user_api_key, - systemPromptTxt, - history, - user_question, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, history, status_display, token_count], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input], show_progress=True - ) - - get_usage_args = dict( - fn=get_usage, inputs=[user_api_key], outputs=[usageTxt], show_progress=False - ) - - # Chatbot - cancelBtn.click(cancel_outputing, [], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_textbox_args) - - retryBtn.click(**reset_textbox_args) - retryBtn.click( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - retryBtn.click(**get_usage_args) - - delFirstBtn.click( - delete_first_conversation, - [history, token_count], - [history, token_count, status_display], - ) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(0), - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - reduceTokenBtn.click(**get_usage_args) - - # ChatGPT - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]).then(**get_usage_args) - keyTxt.submit(**get_usage_args) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - 
[promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apiurlTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_url, - [apiurlTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "🚀" - -if __name__ == "__main__": - reload_javascript() - # if running in Docker - if dockerflag: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - auth=(username, password), - favicon_path="./assets/favicon.ico", - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - share=False, - favicon_path="./assets/favicon.ico", - ) - # if not running in Docker - else: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, - auth=(username, password), - favicon_path="./assets/favicon.ico", - inbrowser=True, - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, favicon_path="./assets/favicon.ico", inbrowser=True - ) # 改为 share=True 可以创建公开分享链接 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/testclient.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/testclient.py deleted file mode 100644 index 4012406aa76f743c5c5d1ab8ff56d6d67cfb6653..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/testclient.py +++ /dev/null @@ -1 +0,0 @@ -from starlette.testclient import TestClient as TestClient # noqa diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/mtiLib/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/mtiLib/__init__.py deleted file mode 100644 index dbedf275e3d3cfb2e8ec43eddd88b9d78ad53e15..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/mtiLib/__init__.py +++ /dev/null @@ -1,1402 +0,0 @@ 
-#!/usr/bin/python - -# FontDame-to-FontTools for OpenType Layout tables -# -# Source language spec is available at: -# http://monotype.github.io/OpenType_Table_Source/otl_source.html -# https://github.com/Monotype/OpenType_Table_Source/ - -from fontTools import ttLib -from fontTools.ttLib.tables._c_m_a_p import cmap_classes -from fontTools.ttLib.tables import otTables as ot -from fontTools.ttLib.tables.otBase import ValueRecord, valueRecordFormatDict -from fontTools.otlLib import builder as otl -from contextlib import contextmanager -from fontTools.ttLib import newTable -from fontTools.feaLib.lookupDebugInfo import LOOKUP_DEBUG_ENV_VAR, LOOKUP_DEBUG_INFO_KEY -from operator import setitem -import os -import logging - - -class MtiLibError(Exception): - pass - - -class ReferenceNotFoundError(MtiLibError): - pass - - -class FeatureNotFoundError(ReferenceNotFoundError): - pass - - -class LookupNotFoundError(ReferenceNotFoundError): - pass - - -log = logging.getLogger("fontTools.mtiLib") - - -def makeGlyph(s): - if s[:2] in ["U ", "u "]: - return ttLib.TTFont._makeGlyphName(int(s[2:], 16)) - elif s[:2] == "# ": - return "glyph%.5d" % int(s[2:]) - assert s.find(" ") < 0, "Space found in glyph name: %s" % s - assert s, "Glyph name is empty" - return s - - -def makeGlyphs(l): - return [makeGlyph(g) for g in l] - - -def mapLookup(sym, mapping): - # Lookups are addressed by name. So resolved them using a map if available. - # Fallback to parsing as lookup index if a map isn't provided. - if mapping is not None: - try: - idx = mapping[sym] - except KeyError: - raise LookupNotFoundError(sym) - else: - idx = int(sym) - return idx - - -def mapFeature(sym, mapping): - # Features are referenced by index according the spec. So, if symbol is an - # integer, use it directly. Otherwise look up in the map if provided. 
- try: - idx = int(sym) - except ValueError: - try: - idx = mapping[sym] - except KeyError: - raise FeatureNotFoundError(sym) - return idx - - -def setReference(mapper, mapping, sym, setter, collection, key): - try: - mapped = mapper(sym, mapping) - except ReferenceNotFoundError as e: - try: - if mapping is not None: - mapping.addDeferredMapping( - lambda ref: setter(collection, key, ref), sym, e - ) - return - except AttributeError: - pass - raise - setter(collection, key, mapped) - - -class DeferredMapping(dict): - def __init__(self): - self._deferredMappings = [] - - def addDeferredMapping(self, setter, sym, e): - log.debug("Adding deferred mapping for symbol '%s' %s", sym, type(e).__name__) - self._deferredMappings.append((setter, sym, e)) - - def applyDeferredMappings(self): - for setter, sym, e in self._deferredMappings: - log.debug( - "Applying deferred mapping for symbol '%s' %s", sym, type(e).__name__ - ) - try: - mapped = self[sym] - except KeyError: - raise e - setter(mapped) - log.debug("Set to %s", mapped) - self._deferredMappings = [] - - -def parseScriptList(lines, featureMap=None): - self = ot.ScriptList() - records = [] - with lines.between("script table"): - for line in lines: - while len(line) < 4: - line.append("") - scriptTag, langSysTag, defaultFeature, features = line - log.debug("Adding script %s language-system %s", scriptTag, langSysTag) - - langSys = ot.LangSys() - langSys.LookupOrder = None - if defaultFeature: - setReference( - mapFeature, - featureMap, - defaultFeature, - setattr, - langSys, - "ReqFeatureIndex", - ) - else: - langSys.ReqFeatureIndex = 0xFFFF - syms = stripSplitComma(features) - langSys.FeatureIndex = theList = [3] * len(syms) - for i, sym in enumerate(syms): - setReference(mapFeature, featureMap, sym, setitem, theList, i) - langSys.FeatureCount = len(langSys.FeatureIndex) - - script = [s for s in records if s.ScriptTag == scriptTag] - if script: - script = script[0].Script - else: - scriptRec = ot.ScriptRecord() - scriptRec.ScriptTag = scriptTag + " " * (4 - len(scriptTag)) - scriptRec.Script = ot.Script() - records.append(scriptRec) - script = scriptRec.Script - script.DefaultLangSys = None - script.LangSysRecord = [] - script.LangSysCount = 0 - - if langSysTag == "default": - script.DefaultLangSys = langSys - else: - langSysRec = ot.LangSysRecord() - langSysRec.LangSysTag = langSysTag + " " * (4 - len(langSysTag)) - langSysRec.LangSys = langSys - script.LangSysRecord.append(langSysRec) - script.LangSysCount = len(script.LangSysRecord) - - for script in records: - script.Script.LangSysRecord = sorted( - script.Script.LangSysRecord, key=lambda rec: rec.LangSysTag - ) - self.ScriptRecord = sorted(records, key=lambda rec: rec.ScriptTag) - self.ScriptCount = len(self.ScriptRecord) - return self - - -def parseFeatureList(lines, lookupMap=None, featureMap=None): - self = ot.FeatureList() - self.FeatureRecord = [] - with lines.between("feature table"): - for line in lines: - name, featureTag, lookups = line - if featureMap is not None: - assert name not in featureMap, "Duplicate feature name: %s" % name - featureMap[name] = len(self.FeatureRecord) - # If feature name is integer, make sure it matches its index. 
- try: - assert int(name) == len(self.FeatureRecord), "%d %d" % ( - name, - len(self.FeatureRecord), - ) - except ValueError: - pass - featureRec = ot.FeatureRecord() - featureRec.FeatureTag = featureTag - featureRec.Feature = ot.Feature() - self.FeatureRecord.append(featureRec) - feature = featureRec.Feature - feature.FeatureParams = None - syms = stripSplitComma(lookups) - feature.LookupListIndex = theList = [None] * len(syms) - for i, sym in enumerate(syms): - setReference(mapLookup, lookupMap, sym, setitem, theList, i) - feature.LookupCount = len(feature.LookupListIndex) - - self.FeatureCount = len(self.FeatureRecord) - return self - - -def parseLookupFlags(lines): - flags = 0 - filterset = None - allFlags = [ - "righttoleft", - "ignorebaseglyphs", - "ignoreligatures", - "ignoremarks", - "markattachmenttype", - "markfiltertype", - ] - while lines.peeks()[0].lower() in allFlags: - line = next(lines) - flag = { - "righttoleft": 0x0001, - "ignorebaseglyphs": 0x0002, - "ignoreligatures": 0x0004, - "ignoremarks": 0x0008, - }.get(line[0].lower()) - if flag: - assert line[1].lower() in ["yes", "no"], line[1] - if line[1].lower() == "yes": - flags |= flag - continue - if line[0].lower() == "markattachmenttype": - flags |= int(line[1]) << 8 - continue - if line[0].lower() == "markfiltertype": - flags |= 0x10 - filterset = int(line[1]) - return flags, filterset - - -def parseSingleSubst(lines, font, _lookupMap=None): - mapping = {} - for line in lines: - assert len(line) == 2, line - line = makeGlyphs(line) - mapping[line[0]] = line[1] - return otl.buildSingleSubstSubtable(mapping) - - -def parseMultiple(lines, font, _lookupMap=None): - mapping = {} - for line in lines: - line = makeGlyphs(line) - mapping[line[0]] = line[1:] - return otl.buildMultipleSubstSubtable(mapping) - - -def parseAlternate(lines, font, _lookupMap=None): - mapping = {} - for line in lines: - line = makeGlyphs(line) - mapping[line[0]] = line[1:] - return otl.buildAlternateSubstSubtable(mapping) - - -def parseLigature(lines, font, _lookupMap=None): - mapping = {} - for line in lines: - assert len(line) >= 2, line - line = makeGlyphs(line) - mapping[tuple(line[1:])] = line[0] - return otl.buildLigatureSubstSubtable(mapping) - - -def parseSinglePos(lines, font, _lookupMap=None): - values = {} - for line in lines: - assert len(line) == 3, line - w = line[0].title().replace(" ", "") - assert w in valueRecordFormatDict - g = makeGlyph(line[1]) - v = int(line[2]) - if g not in values: - values[g] = ValueRecord() - assert not hasattr(values[g], w), (g, w) - setattr(values[g], w, v) - return otl.buildSinglePosSubtable(values, font.getReverseGlyphMap()) - - -def parsePair(lines, font, _lookupMap=None): - self = ot.PairPos() - self.ValueFormat1 = self.ValueFormat2 = 0 - typ = lines.peeks()[0].split()[0].lower() - if typ in ("left", "right"): - self.Format = 1 - values = {} - for line in lines: - assert len(line) == 4, line - side = line[0].split()[0].lower() - assert side in ("left", "right"), side - what = line[0][len(side) :].title().replace(" ", "") - mask = valueRecordFormatDict[what][0] - glyph1, glyph2 = makeGlyphs(line[1:3]) - value = int(line[3]) - if not glyph1 in values: - values[glyph1] = {} - if not glyph2 in values[glyph1]: - values[glyph1][glyph2] = (ValueRecord(), ValueRecord()) - rec2 = values[glyph1][glyph2] - if side == "left": - self.ValueFormat1 |= mask - vr = rec2[0] - else: - self.ValueFormat2 |= mask - vr = rec2[1] - assert not hasattr(vr, what), (vr, what) - setattr(vr, what, value) - self.Coverage = 
makeCoverage(set(values.keys()), font) - self.PairSet = [] - for glyph1 in self.Coverage.glyphs: - values1 = values[glyph1] - pairset = ot.PairSet() - records = pairset.PairValueRecord = [] - for glyph2 in sorted(values1.keys(), key=font.getGlyphID): - values2 = values1[glyph2] - pair = ot.PairValueRecord() - pair.SecondGlyph = glyph2 - pair.Value1 = values2[0] - pair.Value2 = values2[1] if self.ValueFormat2 else None - records.append(pair) - pairset.PairValueCount = len(pairset.PairValueRecord) - self.PairSet.append(pairset) - self.PairSetCount = len(self.PairSet) - elif typ.endswith("class"): - self.Format = 2 - classDefs = [None, None] - while lines.peeks()[0].endswith("class definition begin"): - typ = lines.peek()[0][: -len("class definition begin")].lower() - idx, klass = { - "first": (0, ot.ClassDef1), - "second": (1, ot.ClassDef2), - }[typ] - assert classDefs[idx] is None - classDefs[idx] = parseClassDef(lines, font, klass=klass) - self.ClassDef1, self.ClassDef2 = classDefs - self.Class1Count, self.Class2Count = ( - 1 + max(c.classDefs.values()) for c in classDefs - ) - self.Class1Record = [ot.Class1Record() for i in range(self.Class1Count)] - for rec1 in self.Class1Record: - rec1.Class2Record = [ot.Class2Record() for j in range(self.Class2Count)] - for rec2 in rec1.Class2Record: - rec2.Value1 = ValueRecord() - rec2.Value2 = ValueRecord() - for line in lines: - assert len(line) == 4, line - side = line[0].split()[0].lower() - assert side in ("left", "right"), side - what = line[0][len(side) :].title().replace(" ", "") - mask = valueRecordFormatDict[what][0] - class1, class2, value = (int(x) for x in line[1:4]) - rec2 = self.Class1Record[class1].Class2Record[class2] - if side == "left": - self.ValueFormat1 |= mask - vr = rec2.Value1 - else: - self.ValueFormat2 |= mask - vr = rec2.Value2 - assert not hasattr(vr, what), (vr, what) - setattr(vr, what, value) - for rec1 in self.Class1Record: - for rec2 in rec1.Class2Record: - rec2.Value1 = ValueRecord(self.ValueFormat1, rec2.Value1) - rec2.Value2 = ( - ValueRecord(self.ValueFormat2, rec2.Value2) - if self.ValueFormat2 - else None - ) - - self.Coverage = makeCoverage(set(self.ClassDef1.classDefs.keys()), font) - else: - assert 0, typ - return self - - -def parseKernset(lines, font, _lookupMap=None): - typ = lines.peeks()[0].split()[0].lower() - if typ in ("left", "right"): - with lines.until( - ("firstclass definition begin", "secondclass definition begin") - ): - return parsePair(lines, font) - return parsePair(lines, font) - - -def makeAnchor(data, klass=ot.Anchor): - assert len(data) <= 2 - anchor = klass() - anchor.Format = 1 - anchor.XCoordinate, anchor.YCoordinate = intSplitComma(data[0]) - if len(data) > 1 and data[1] != "": - anchor.Format = 2 - anchor.AnchorPoint = int(data[1]) - return anchor - - -def parseCursive(lines, font, _lookupMap=None): - records = {} - for line in lines: - assert len(line) in [3, 4], line - idx, klass = { - "entry": (0, ot.EntryAnchor), - "exit": (1, ot.ExitAnchor), - }[line[0]] - glyph = makeGlyph(line[1]) - if glyph not in records: - records[glyph] = [None, None] - assert records[glyph][idx] is None, (glyph, idx) - records[glyph][idx] = makeAnchor(line[2:], klass) - return otl.buildCursivePosSubtable(records, font.getReverseGlyphMap()) - - -def makeMarkRecords(data, coverage, c): - records = [] - for glyph in coverage.glyphs: - klass, anchor = data[glyph] - record = c.MarkRecordClass() - record.Class = klass - setattr(record, c.MarkAnchor, anchor) - records.append(record) - return records - - -def 
makeBaseRecords(data, coverage, c, classCount): - records = [] - idx = {} - for glyph in coverage.glyphs: - idx[glyph] = len(records) - record = c.BaseRecordClass() - anchors = [None] * classCount - setattr(record, c.BaseAnchor, anchors) - records.append(record) - for (glyph, klass), anchor in data.items(): - record = records[idx[glyph]] - anchors = getattr(record, c.BaseAnchor) - assert anchors[klass] is None, (glyph, klass) - anchors[klass] = anchor - return records - - -def makeLigatureRecords(data, coverage, c, classCount): - records = [None] * len(coverage.glyphs) - idx = {g: i for i, g in enumerate(coverage.glyphs)} - - for (glyph, klass, compIdx, compCount), anchor in data.items(): - record = records[idx[glyph]] - if record is None: - record = records[idx[glyph]] = ot.LigatureAttach() - record.ComponentCount = compCount - record.ComponentRecord = [ot.ComponentRecord() for i in range(compCount)] - for compRec in record.ComponentRecord: - compRec.LigatureAnchor = [None] * classCount - assert record.ComponentCount == compCount, ( - glyph, - record.ComponentCount, - compCount, - ) - - anchors = record.ComponentRecord[compIdx - 1].LigatureAnchor - assert anchors[klass] is None, (glyph, compIdx, klass) - anchors[klass] = anchor - return records - - -def parseMarkToSomething(lines, font, c): - self = c.Type() - self.Format = 1 - markData = {} - baseData = {} - Data = { - "mark": (markData, c.MarkAnchorClass), - "base": (baseData, c.BaseAnchorClass), - "ligature": (baseData, c.BaseAnchorClass), - } - maxKlass = 0 - for line in lines: - typ = line[0] - assert typ in ("mark", "base", "ligature") - glyph = makeGlyph(line[1]) - data, anchorClass = Data[typ] - extraItems = 2 if typ == "ligature" else 0 - extras = tuple(int(i) for i in line[2 : 2 + extraItems]) - klass = int(line[2 + extraItems]) - anchor = makeAnchor(line[3 + extraItems :], anchorClass) - if typ == "mark": - key, value = glyph, (klass, anchor) - else: - key, value = ((glyph, klass) + extras), anchor - assert key not in data, key - data[key] = value - maxKlass = max(maxKlass, klass) - - # Mark - markCoverage = makeCoverage(set(markData.keys()), font, c.MarkCoverageClass) - markArray = c.MarkArrayClass() - markRecords = makeMarkRecords(markData, markCoverage, c) - setattr(markArray, c.MarkRecord, markRecords) - setattr(markArray, c.MarkCount, len(markRecords)) - setattr(self, c.MarkCoverage, markCoverage) - setattr(self, c.MarkArray, markArray) - self.ClassCount = maxKlass + 1 - - # Base - self.classCount = 0 if not baseData else 1 + max(k[1] for k, v in baseData.items()) - baseCoverage = makeCoverage( - set([k[0] for k in baseData.keys()]), font, c.BaseCoverageClass - ) - baseArray = c.BaseArrayClass() - if c.Base == "Ligature": - baseRecords = makeLigatureRecords(baseData, baseCoverage, c, self.classCount) - else: - baseRecords = makeBaseRecords(baseData, baseCoverage, c, self.classCount) - setattr(baseArray, c.BaseRecord, baseRecords) - setattr(baseArray, c.BaseCount, len(baseRecords)) - setattr(self, c.BaseCoverage, baseCoverage) - setattr(self, c.BaseArray, baseArray) - - return self - - -class MarkHelper(object): - def __init__(self): - for Which in ("Mark", "Base"): - for What in ("Coverage", "Array", "Count", "Record", "Anchor"): - key = Which + What - if Which == "Mark" and What in ("Count", "Record", "Anchor"): - value = key - else: - value = getattr(self, Which) + What - if value == "LigatureRecord": - value = "LigatureAttach" - setattr(self, key, value) - if What != "Count": - klass = getattr(ot, value) - setattr(self, 
key + "Class", klass) - - -class MarkToBaseHelper(MarkHelper): - Mark = "Mark" - Base = "Base" - Type = ot.MarkBasePos - - -class MarkToMarkHelper(MarkHelper): - Mark = "Mark1" - Base = "Mark2" - Type = ot.MarkMarkPos - - -class MarkToLigatureHelper(MarkHelper): - Mark = "Mark" - Base = "Ligature" - Type = ot.MarkLigPos - - -def parseMarkToBase(lines, font, _lookupMap=None): - return parseMarkToSomething(lines, font, MarkToBaseHelper()) - - -def parseMarkToMark(lines, font, _lookupMap=None): - return parseMarkToSomething(lines, font, MarkToMarkHelper()) - - -def parseMarkToLigature(lines, font, _lookupMap=None): - return parseMarkToSomething(lines, font, MarkToLigatureHelper()) - - -def stripSplitComma(line): - return [s.strip() for s in line.split(",")] if line else [] - - -def intSplitComma(line): - return [int(i) for i in line.split(",")] if line else [] - - -# Copied from fontTools.subset -class ContextHelper(object): - def __init__(self, klassName, Format): - if klassName.endswith("Subst"): - Typ = "Sub" - Type = "Subst" - else: - Typ = "Pos" - Type = "Pos" - if klassName.startswith("Chain"): - Chain = "Chain" - InputIdx = 1 - DataLen = 3 - else: - Chain = "" - InputIdx = 0 - DataLen = 1 - ChainTyp = Chain + Typ - - self.Typ = Typ - self.Type = Type - self.Chain = Chain - self.ChainTyp = ChainTyp - self.InputIdx = InputIdx - self.DataLen = DataLen - - self.LookupRecord = Type + "LookupRecord" - - if Format == 1: - Coverage = lambda r: r.Coverage - ChainCoverage = lambda r: r.Coverage - ContextData = lambda r: (None,) - ChainContextData = lambda r: (None, None, None) - SetContextData = None - SetChainContextData = None - RuleData = lambda r: (r.Input,) - ChainRuleData = lambda r: (r.Backtrack, r.Input, r.LookAhead) - - def SetRuleData(r, d): - (r.Input,) = d - (r.GlyphCount,) = (len(x) + 1 for x in d) - - def ChainSetRuleData(r, d): - (r.Backtrack, r.Input, r.LookAhead) = d - ( - r.BacktrackGlyphCount, - r.InputGlyphCount, - r.LookAheadGlyphCount, - ) = (len(d[0]), len(d[1]) + 1, len(d[2])) - - elif Format == 2: - Coverage = lambda r: r.Coverage - ChainCoverage = lambda r: r.Coverage - ContextData = lambda r: (r.ClassDef,) - ChainContextData = lambda r: ( - r.BacktrackClassDef, - r.InputClassDef, - r.LookAheadClassDef, - ) - - def SetContextData(r, d): - (r.ClassDef,) = d - - def SetChainContextData(r, d): - (r.BacktrackClassDef, r.InputClassDef, r.LookAheadClassDef) = d - - RuleData = lambda r: (r.Class,) - ChainRuleData = lambda r: (r.Backtrack, r.Input, r.LookAhead) - - def SetRuleData(r, d): - (r.Class,) = d - (r.GlyphCount,) = (len(x) + 1 for x in d) - - def ChainSetRuleData(r, d): - (r.Backtrack, r.Input, r.LookAhead) = d - ( - r.BacktrackGlyphCount, - r.InputGlyphCount, - r.LookAheadGlyphCount, - ) = (len(d[0]), len(d[1]) + 1, len(d[2])) - - elif Format == 3: - Coverage = lambda r: r.Coverage[0] - ChainCoverage = lambda r: r.InputCoverage[0] - ContextData = None - ChainContextData = None - SetContextData = None - SetChainContextData = None - RuleData = lambda r: r.Coverage - ChainRuleData = lambda r: ( - r.BacktrackCoverage + r.InputCoverage + r.LookAheadCoverage - ) - - def SetRuleData(r, d): - (r.Coverage,) = d - (r.GlyphCount,) = (len(x) for x in d) - - def ChainSetRuleData(r, d): - (r.BacktrackCoverage, r.InputCoverage, r.LookAheadCoverage) = d - ( - r.BacktrackGlyphCount, - r.InputGlyphCount, - r.LookAheadGlyphCount, - ) = (len(x) for x in d) - - else: - assert 0, "unknown format: %s" % Format - - if Chain: - self.Coverage = ChainCoverage - self.ContextData = 
ChainContextData - self.SetContextData = SetChainContextData - self.RuleData = ChainRuleData - self.SetRuleData = ChainSetRuleData - else: - self.Coverage = Coverage - self.ContextData = ContextData - self.SetContextData = SetContextData - self.RuleData = RuleData - self.SetRuleData = SetRuleData - - if Format == 1: - self.Rule = ChainTyp + "Rule" - self.RuleCount = ChainTyp + "RuleCount" - self.RuleSet = ChainTyp + "RuleSet" - self.RuleSetCount = ChainTyp + "RuleSetCount" - self.Intersect = lambda glyphs, c, r: [r] if r in glyphs else [] - elif Format == 2: - self.Rule = ChainTyp + "ClassRule" - self.RuleCount = ChainTyp + "ClassRuleCount" - self.RuleSet = ChainTyp + "ClassSet" - self.RuleSetCount = ChainTyp + "ClassSetCount" - self.Intersect = lambda glyphs, c, r: ( - c.intersect_class(glyphs, r) - if c - else (set(glyphs) if r == 0 else set()) - ) - - self.ClassDef = "InputClassDef" if Chain else "ClassDef" - self.ClassDefIndex = 1 if Chain else 0 - self.Input = "Input" if Chain else "Class" - - -def parseLookupRecords(items, klassName, lookupMap=None): - klass = getattr(ot, klassName) - lst = [] - for item in items: - rec = klass() - item = stripSplitComma(item) - assert len(item) == 2, item - idx = int(item[0]) - assert idx > 0, idx - rec.SequenceIndex = idx - 1 - setReference(mapLookup, lookupMap, item[1], setattr, rec, "LookupListIndex") - lst.append(rec) - return lst - - -def makeClassDef(classDefs, font, klass=ot.Coverage): - if not classDefs: - return None - self = klass() - self.classDefs = dict(classDefs) - return self - - -def parseClassDef(lines, font, klass=ot.ClassDef): - classDefs = {} - with lines.between("class definition"): - for line in lines: - glyph = makeGlyph(line[0]) - assert glyph not in classDefs, glyph - classDefs[glyph] = int(line[1]) - return makeClassDef(classDefs, font, klass) - - -def makeCoverage(glyphs, font, klass=ot.Coverage): - if not glyphs: - return None - if isinstance(glyphs, set): - glyphs = sorted(glyphs) - coverage = klass() - coverage.glyphs = sorted(set(glyphs), key=font.getGlyphID) - return coverage - - -def parseCoverage(lines, font, klass=ot.Coverage): - glyphs = [] - with lines.between("coverage definition"): - for line in lines: - glyphs.append(makeGlyph(line[0])) - return makeCoverage(glyphs, font, klass) - - -def bucketizeRules(self, c, rules, bucketKeys): - buckets = {} - for seq, recs in rules: - buckets.setdefault(seq[c.InputIdx][0], []).append( - (tuple(s[1 if i == c.InputIdx else 0 :] for i, s in enumerate(seq)), recs) - ) - - rulesets = [] - for firstGlyph in bucketKeys: - if firstGlyph not in buckets: - rulesets.append(None) - continue - thisRules = [] - for seq, recs in buckets[firstGlyph]: - rule = getattr(ot, c.Rule)() - c.SetRuleData(rule, seq) - setattr(rule, c.Type + "Count", len(recs)) - setattr(rule, c.LookupRecord, recs) - thisRules.append(rule) - - ruleset = getattr(ot, c.RuleSet)() - setattr(ruleset, c.Rule, thisRules) - setattr(ruleset, c.RuleCount, len(thisRules)) - rulesets.append(ruleset) - - setattr(self, c.RuleSet, rulesets) - setattr(self, c.RuleSetCount, len(rulesets)) - - -def parseContext(lines, font, Type, lookupMap=None): - self = getattr(ot, Type)() - typ = lines.peeks()[0].split()[0].lower() - if typ == "glyph": - self.Format = 1 - log.debug("Parsing %s format %s", Type, self.Format) - c = ContextHelper(Type, self.Format) - rules = [] - for line in lines: - assert line[0].lower() == "glyph", line[0] - while len(line) < 1 + c.DataLen: - line.append("") - seq = tuple(makeGlyphs(stripSplitComma(i)) for i in 
line[1 : 1 + c.DataLen]) - recs = parseLookupRecords(line[1 + c.DataLen :], c.LookupRecord, lookupMap) - rules.append((seq, recs)) - - firstGlyphs = set(seq[c.InputIdx][0] for seq, recs in rules) - self.Coverage = makeCoverage(firstGlyphs, font) - bucketizeRules(self, c, rules, self.Coverage.glyphs) - elif typ.endswith("class"): - self.Format = 2 - log.debug("Parsing %s format %s", Type, self.Format) - c = ContextHelper(Type, self.Format) - classDefs = [None] * c.DataLen - while lines.peeks()[0].endswith("class definition begin"): - typ = lines.peek()[0][: -len("class definition begin")].lower() - idx, klass = { - 1: { - "": (0, ot.ClassDef), - }, - 3: { - "backtrack": (0, ot.BacktrackClassDef), - "": (1, ot.InputClassDef), - "lookahead": (2, ot.LookAheadClassDef), - }, - }[c.DataLen][typ] - assert classDefs[idx] is None, idx - classDefs[idx] = parseClassDef(lines, font, klass=klass) - c.SetContextData(self, classDefs) - rules = [] - for line in lines: - assert line[0].lower().startswith("class"), line[0] - while len(line) < 1 + c.DataLen: - line.append("") - seq = tuple(intSplitComma(i) for i in line[1 : 1 + c.DataLen]) - recs = parseLookupRecords(line[1 + c.DataLen :], c.LookupRecord, lookupMap) - rules.append((seq, recs)) - firstClasses = set(seq[c.InputIdx][0] for seq, recs in rules) - firstGlyphs = set( - g for g, c in classDefs[c.InputIdx].classDefs.items() if c in firstClasses - ) - self.Coverage = makeCoverage(firstGlyphs, font) - bucketizeRules(self, c, rules, range(max(firstClasses) + 1)) - elif typ.endswith("coverage"): - self.Format = 3 - log.debug("Parsing %s format %s", Type, self.Format) - c = ContextHelper(Type, self.Format) - coverages = tuple([] for i in range(c.DataLen)) - while lines.peeks()[0].endswith("coverage definition begin"): - typ = lines.peek()[0][: -len("coverage definition begin")].lower() - idx, klass = { - 1: { - "": (0, ot.Coverage), - }, - 3: { - "backtrack": (0, ot.BacktrackCoverage), - "input": (1, ot.InputCoverage), - "lookahead": (2, ot.LookAheadCoverage), - }, - }[c.DataLen][typ] - coverages[idx].append(parseCoverage(lines, font, klass=klass)) - c.SetRuleData(self, coverages) - lines = list(lines) - assert len(lines) == 1 - line = lines[0] - assert line[0].lower() == "coverage", line[0] - recs = parseLookupRecords(line[1:], c.LookupRecord, lookupMap) - setattr(self, c.Type + "Count", len(recs)) - setattr(self, c.LookupRecord, recs) - else: - assert 0, typ - return self - - -def parseContextSubst(lines, font, lookupMap=None): - return parseContext(lines, font, "ContextSubst", lookupMap=lookupMap) - - -def parseContextPos(lines, font, lookupMap=None): - return parseContext(lines, font, "ContextPos", lookupMap=lookupMap) - - -def parseChainedSubst(lines, font, lookupMap=None): - return parseContext(lines, font, "ChainContextSubst", lookupMap=lookupMap) - - -def parseChainedPos(lines, font, lookupMap=None): - return parseContext(lines, font, "ChainContextPos", lookupMap=lookupMap) - - -def parseReverseChainedSubst(lines, font, _lookupMap=None): - self = ot.ReverseChainSingleSubst() - self.Format = 1 - coverages = ([], []) - while lines.peeks()[0].endswith("coverage definition begin"): - typ = lines.peek()[0][: -len("coverage definition begin")].lower() - idx, klass = { - "backtrack": (0, ot.BacktrackCoverage), - "lookahead": (1, ot.LookAheadCoverage), - }[typ] - coverages[idx].append(parseCoverage(lines, font, klass=klass)) - self.BacktrackCoverage = coverages[0] - self.BacktrackGlyphCount = len(self.BacktrackCoverage) - self.LookAheadCoverage = 
coverages[1] - self.LookAheadGlyphCount = len(self.LookAheadCoverage) - mapping = {} - for line in lines: - assert len(line) == 2, line - line = makeGlyphs(line) - mapping[line[0]] = line[1] - self.Coverage = makeCoverage(set(mapping.keys()), font) - self.Substitute = [mapping[k] for k in self.Coverage.glyphs] - self.GlyphCount = len(self.Substitute) - return self - - -def parseLookup(lines, tableTag, font, lookupMap=None): - line = lines.expect("lookup") - _, name, typ = line - log.debug("Parsing lookup type %s %s", typ, name) - lookup = ot.Lookup() - lookup.LookupFlag, filterset = parseLookupFlags(lines) - if filterset is not None: - lookup.MarkFilteringSet = filterset - lookup.LookupType, parseLookupSubTable = { - "GSUB": { - "single": (1, parseSingleSubst), - "multiple": (2, parseMultiple), - "alternate": (3, parseAlternate), - "ligature": (4, parseLigature), - "context": (5, parseContextSubst), - "chained": (6, parseChainedSubst), - "reversechained": (8, parseReverseChainedSubst), - }, - "GPOS": { - "single": (1, parseSinglePos), - "pair": (2, parsePair), - "kernset": (2, parseKernset), - "cursive": (3, parseCursive), - "mark to base": (4, parseMarkToBase), - "mark to ligature": (5, parseMarkToLigature), - "mark to mark": (6, parseMarkToMark), - "context": (7, parseContextPos), - "chained": (8, parseChainedPos), - }, - }[tableTag][typ] - - with lines.until("lookup end"): - subtables = [] - - while lines.peek(): - with lines.until(("% subtable", "subtable end")): - while lines.peek(): - subtable = parseLookupSubTable(lines, font, lookupMap) - assert lookup.LookupType == subtable.LookupType - subtables.append(subtable) - if lines.peeks()[0] in ("% subtable", "subtable end"): - next(lines) - lines.expect("lookup end") - - lookup.SubTable = subtables - lookup.SubTableCount = len(lookup.SubTable) - if lookup.SubTableCount == 0: - # Remove this return when following is fixed: - # https://github.com/fonttools/fonttools/issues/789 - return None - return lookup - - -def parseGSUBGPOS(lines, font, tableTag): - container = ttLib.getTableClass(tableTag)() - lookupMap = DeferredMapping() - featureMap = DeferredMapping() - assert tableTag in ("GSUB", "GPOS") - log.debug("Parsing %s", tableTag) - self = getattr(ot, tableTag)() - self.Version = 0x00010000 - fields = { - "script table begin": ( - "ScriptList", - lambda lines: parseScriptList(lines, featureMap), - ), - "feature table begin": ( - "FeatureList", - lambda lines: parseFeatureList(lines, lookupMap, featureMap), - ), - "lookup": ("LookupList", None), - } - for attr, parser in fields.values(): - setattr(self, attr, None) - while lines.peek() is not None: - typ = lines.peek()[0].lower() - if typ not in fields: - log.debug("Skipping %s", lines.peek()) - next(lines) - continue - attr, parser = fields[typ] - if typ == "lookup": - if self.LookupList is None: - self.LookupList = ot.LookupList() - self.LookupList.Lookup = [] - _, name, _ = lines.peek() - lookup = parseLookup(lines, tableTag, font, lookupMap) - if lookupMap is not None: - assert name not in lookupMap, "Duplicate lookup name: %s" % name - lookupMap[name] = len(self.LookupList.Lookup) - else: - assert int(name) == len(self.LookupList.Lookup), "%d %d" % ( - name, - len(self.Lookup), - ) - self.LookupList.Lookup.append(lookup) - else: - assert getattr(self, attr) is None, attr - setattr(self, attr, parser(lines)) - if self.LookupList: - self.LookupList.LookupCount = len(self.LookupList.Lookup) - if lookupMap is not None: - lookupMap.applyDeferredMappings() - if 
os.environ.get(LOOKUP_DEBUG_ENV_VAR): - if "Debg" not in font: - font["Debg"] = newTable("Debg") - font["Debg"].data = {} - debug = ( - font["Debg"] - .data.setdefault(LOOKUP_DEBUG_INFO_KEY, {}) - .setdefault(tableTag, {}) - ) - for name, lookup in lookupMap.items(): - debug[str(lookup)] = ["", name, ""] - - featureMap.applyDeferredMappings() - container.table = self - return container - - -def parseGSUB(lines, font): - return parseGSUBGPOS(lines, font, "GSUB") - - -def parseGPOS(lines, font): - return parseGSUBGPOS(lines, font, "GPOS") - - -def parseAttachList(lines, font): - points = {} - with lines.between("attachment list"): - for line in lines: - glyph = makeGlyph(line[0]) - assert glyph not in points, glyph - points[glyph] = [int(i) for i in line[1:]] - return otl.buildAttachList(points, font.getReverseGlyphMap()) - - -def parseCaretList(lines, font): - carets = {} - with lines.between("carets"): - for line in lines: - glyph = makeGlyph(line[0]) - assert glyph not in carets, glyph - num = int(line[1]) - thisCarets = [int(i) for i in line[2:]] - assert num == len(thisCarets), line - carets[glyph] = thisCarets - return otl.buildLigCaretList(carets, {}, font.getReverseGlyphMap()) - - -def makeMarkFilteringSets(sets, font): - self = ot.MarkGlyphSetsDef() - self.MarkSetTableFormat = 1 - self.MarkSetCount = 1 + max(sets.keys()) - self.Coverage = [None] * self.MarkSetCount - for k, v in sorted(sets.items()): - self.Coverage[k] = makeCoverage(set(v), font) - return self - - -def parseMarkFilteringSets(lines, font): - sets = {} - with lines.between("set definition"): - for line in lines: - assert len(line) == 2, line - glyph = makeGlyph(line[0]) - # TODO accept set names - st = int(line[1]) - if st not in sets: - sets[st] = [] - sets[st].append(glyph) - return makeMarkFilteringSets(sets, font) - - -def parseGDEF(lines, font): - container = ttLib.getTableClass("GDEF")() - log.debug("Parsing GDEF") - self = ot.GDEF() - fields = { - "class definition begin": ( - "GlyphClassDef", - lambda lines, font: parseClassDef(lines, font, klass=ot.GlyphClassDef), - ), - "attachment list begin": ("AttachList", parseAttachList), - "carets begin": ("LigCaretList", parseCaretList), - "mark attachment class definition begin": ( - "MarkAttachClassDef", - lambda lines, font: parseClassDef(lines, font, klass=ot.MarkAttachClassDef), - ), - "markfilter set definition begin": ("MarkGlyphSetsDef", parseMarkFilteringSets), - } - for attr, parser in fields.values(): - setattr(self, attr, None) - while lines.peek() is not None: - typ = lines.peek()[0].lower() - if typ not in fields: - log.debug("Skipping %s", typ) - next(lines) - continue - attr, parser = fields[typ] - assert getattr(self, attr) is None, attr - setattr(self, attr, parser(lines, font)) - self.Version = 0x00010000 if self.MarkGlyphSetsDef is None else 0x00010002 - container.table = self - return container - - -def parseCmap(lines, font): - container = ttLib.getTableClass("cmap")() - log.debug("Parsing cmap") - tables = [] - while lines.peek() is not None: - lines.expect("cmap subtable %d" % len(tables)) - platId, encId, fmt, lang = [ - parseCmapId(lines, field) - for field in ("platformID", "encodingID", "format", "language") - ] - table = cmap_classes[fmt](fmt) - table.platformID = platId - table.platEncID = encId - table.language = lang - table.cmap = {} - line = next(lines) - while line[0] != "end subtable": - table.cmap[int(line[0], 16)] = line[1] - line = next(lines) - tables.append(table) - container.tableVersion = 0 - container.tables = tables - 
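-    # (cmap_classes maps a subtable format number to the same per-format
-    # classes ttLib uses when decompiling a binary 'cmap' -- in fontTools it
-    # lives in ttLib.tables._c_m_a_p -- so the assembled container compiles
-    # like any ordinary cmap table.)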
return container - - -def parseCmapId(lines, field): - line = next(lines) - assert field == line[0] - return int(line[1]) - - -def parseTable(lines, font, tableTag=None): - log.debug("Parsing table") - line = lines.peeks() - tag = None - if line[0].split()[0] == "FontDame": - tag = line[0].split()[1] - elif " ".join(line[0].split()[:3]) == "Font Chef Table": - tag = line[0].split()[3] - if tag is not None: - next(lines) - tag = tag.ljust(4) - if tableTag is None: - tableTag = tag - else: - assert tableTag == tag, (tableTag, tag) - - assert ( - tableTag is not None - ), "Don't know what table to parse and data doesn't specify" - - return { - "GSUB": parseGSUB, - "GPOS": parseGPOS, - "GDEF": parseGDEF, - "cmap": parseCmap, - }[tableTag](lines, font) - - -class Tokenizer(object): - def __init__(self, f): - # TODO BytesIO / StringIO as needed? also, figure out whether we work on bytes or unicode - lines = iter(f) - try: - self.filename = f.name - except: - self.filename = None - self.lines = iter(lines) - self.line = "" - self.lineno = 0 - self.stoppers = [] - self.buffer = None - - def __iter__(self): - return self - - def _next_line(self): - self.lineno += 1 - line = self.line = next(self.lines) - line = [s.strip() for s in line.split("\t")] - if len(line) == 1 and not line[0]: - del line[0] - if line and not line[-1]: - log.warning("trailing tab found on line %d: %s" % (self.lineno, self.line)) - while line and not line[-1]: - del line[-1] - return line - - def _next_nonempty(self): - while True: - line = self._next_line() - # Skip comments and empty lines - if line and line[0] and (line[0][0] != "%" or line[0] == "% subtable"): - return line - - def _next_buffered(self): - if self.buffer: - ret = self.buffer - self.buffer = None - return ret - else: - return self._next_nonempty() - - def __next__(self): - line = self._next_buffered() - if line[0].lower() in self.stoppers: - self.buffer = line - raise StopIteration - return line - - def next(self): - return self.__next__() - - def peek(self): - if not self.buffer: - try: - self.buffer = self._next_nonempty() - except StopIteration: - return None - if self.buffer[0].lower() in self.stoppers: - return None - return self.buffer - - def peeks(self): - ret = self.peek() - return ret if ret is not None else ("",) - - @contextmanager - def between(self, tag): - start = tag + " begin" - end = tag + " end" - self.expectendswith(start) - self.stoppers.append(end) - yield - del self.stoppers[-1] - self.expect(tag + " end") - - @contextmanager - def until(self, tags): - if type(tags) is not tuple: - tags = (tags,) - self.stoppers.extend(tags) - yield - del self.stoppers[-len(tags) :] - - def expect(self, s): - line = next(self) - tag = line[0].lower() - assert tag == s, "Expected '%s', got '%s'" % (s, tag) - return line - - def expectendswith(self, s): - line = next(self) - tag = line[0].lower() - assert tag.endswith(s), "Expected '*%s', got '%s'" % (s, tag) - return line - - -def build(f, font, tableTag=None): - """Convert a Monotype font layout file to an OpenType layout object - - A font object must be passed, but this may be a "dummy" font; it is only - used for sorting glyph sets when making coverage tables and to hold the - OpenType layout table while it is being built. - - Args: - f: A file object. - font (TTFont): A font object. - tableTag (string): If provided, asserts that the file contains data for the - given OpenType table. - - Returns: - An object representing the table. (e.g. 
``table_G_S_U_B_``) - """ - lines = Tokenizer(f) - return parseTable(lines, font, tableTag=tableTag) - - -def main(args=None, font=None): - """Convert a FontDame OTL file to TTX XML - - Writes XML output to stdout. - - Args: - args: Command line arguments (``--font``, ``--table``, input files). - """ - import sys - from fontTools import configLogger - from fontTools.misc.testTools import MockFont - - if args is None: - args = sys.argv[1:] - - # configure the library logger (for >= WARNING) - configLogger() - # comment this out to enable debug messages from mtiLib's logger - # log.setLevel(logging.DEBUG) - - import argparse - - parser = argparse.ArgumentParser( - "fonttools mtiLib", - description=main.__doc__, - ) - - parser.add_argument( - "--font", - "-f", - metavar="FILE", - dest="font", - help="Input TTF files (used for glyph classes and sorting coverage tables)", - ) - parser.add_argument( - "--table", - "-t", - metavar="TABLE", - dest="tableTag", - help="Table to fill (sniffed from input file if not provided)", - ) - parser.add_argument( - "inputs", metavar="FILE", type=str, nargs="+", help="Input FontDame .txt files" - ) - - args = parser.parse_args(args) - - if font is None: - if args.font: - font = ttLib.TTFont(args.font) - else: - font = MockFont() - - for f in args.inputs: - log.debug("Processing %s", f) - with open(f, "rt", encoding="utf-8") as f: - table = build(f, font, tableTag=args.tableTag) - blob = table.compile(font) # Make sure it compiles - decompiled = table.__class__() - decompiled.decompile(blob, font) # Make sure it decompiles! - - # continue - from fontTools.misc import xmlWriter - - tag = table.tableTag - writer = xmlWriter.XMLWriter(sys.stdout) - writer.begintag(tag) - writer.newline() - # table.toXML(writer, font) - decompiled.toXML(writer, font) - writer.endtag(tag) - writer.newline() - - -if __name__ == "__main__": - import sys - - sys.exit(main()) diff --git a/spaces/johnyang/ChatPaper111/base_class.py b/spaces/johnyang/ChatPaper111/base_class.py deleted file mode 100644 index 6cfb5f5cd39e69f8ecf9e5071242a25352fb30a3..0000000000000000000000000000000000000000 --- a/spaces/johnyang/ChatPaper111/base_class.py +++ /dev/null @@ -1,106 +0,0 @@ -import abc -import pandas as pd -import pickle - - -class SimilarityAlg(metaclass=abc.ABCMeta): - """Similarity Algorithm to compute similarity between query_embedding and embeddings""" - - def __init__(self) -> None: - pass - - @abc.abstractmethod - def __call__(self, query_embedding, embeddings) -> None: - pass - - -class Embedding_Model(metaclass=abc.ABCMeta): - """Embedding Model to compute embedding of a text""" - - def __init__(self, model_name) -> None: - """Initialize the embedding model""" - embedding_cache_path = f"/app/ckpt/embedding_cache_{model_name}.pkl" - self.embedding_cache_path = embedding_cache_path - - # load the cache if it exists, and save a copy to disk - try: - embedding_cache = pd.read_pickle(embedding_cache_path) - except FileNotFoundError: - embedding_cache = {} - with open(embedding_cache_path, "wb") as embedding_cache_file: - pickle.dump(embedding_cache, embedding_cache_file) - self.embedding_cache = embedding_cache - self.model_name = model_name - - @abc.abstractmethod - def __call__(self, text) -> None: - """Compute the embedding of the text""" - pass - - -class AbstractPDFParser(metaclass=abc.ABCMeta): - """ PDF parser to parse a PDF file""" - - def __init__(self, db_name) -> None: - """Initialize the pdf database""" - db_cache_path = f"/app/ckpt/pdf_parser_{db_name}.pkl" - self.db_cache_path 
= db_cache_path - - # load the cache if it exists, and save a copy to disk - try: - db_cache = pd.read_pickle(db_cache_path) - except FileNotFoundError: - db_cache = {} - with open(db_cache_path, "wb") as cache_file: - pickle.dump(db_cache, cache_file) - self.db_cache = db_cache - self.db_name = db_name - - @abc.abstractmethod - def parse_pdf(self,) -> None: - """Parse the PDF file""" - pass - - @abc.abstractmethod - def _get_metadata(self, ) -> None: - """Get the metadata of the PDF file""" - pass - - def get_paragraphs(self, ) -> None: - """Get the paragraphs of the PDF file""" - pass - - @abc.abstractmethod - def get_split_paragraphs(self, ) -> None: - """ - Get the split paragraphs of the PDF file - Return: - split_paragraphs: dict of metadata and corresponding list of split paragraphs - """ - pass - - def _determine_metadata_of_paragraph(self, paragraph) -> None: - """ - Determine the metadata of a paragraph - Return: - metadata: metadata of the paragraph - """ - pass - - # @abc.abstractmethod - # def _determine_optimal_split_of_pargraphs(self, ) -> None: - # """ - # Determine the optimal split of paragraphs - # Return: - # split_paragraphs: dict of metadata and corresponding list of split paragraphs - # """ - # pass - - -class ChatbotEngine(metaclass=abc.ABCMeta): - def __init__(self,) -> None: - pass - - @abc.abstractmethod - def query(self, user_query): - pass diff --git a/spaces/jone/Music_Source_Separation/bytesep/plot_results/plot_vctk-musdb18.py b/spaces/jone/Music_Source_Separation/bytesep/plot_results/plot_vctk-musdb18.py deleted file mode 100644 index b7cc52af1e20b8f051bbe0dd8fd30c60a0dbf587..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/bytesep/plot_results/plot_vctk-musdb18.py +++ /dev/null @@ -1,87 +0,0 @@ -import os -import sys -import numpy as np -import argparse -import h5py -import math -import time -import logging -import pickle -import matplotlib.pyplot as plt - - -def load_sdrs(workspace, task_name, filename, config, gpus): - - stat_path = os.path.join( - workspace, - "statistics", - task_name, - filename, - "config={},gpus={}".format(config, gpus), - "statistics.pkl", - ) - - stat_dict = pickle.load(open(stat_path, 'rb')) - - median_sdrs = [e['sdr'] for e in stat_dict['test']] - - return median_sdrs - - -def plot_statistics(args): - - # arguments & parameters - workspace = args.workspace - select = args.select - task_name = "vctk-musdb18" - filename = "train" - - # paths - fig_path = os.path.join('results', task_name, "sdr_{}.pdf".format(select)) - os.makedirs(os.path.dirname(fig_path), exist_ok=True) - - linewidth = 1 - lines = [] - fig, ax = plt.subplots(1, 1, figsize=(8, 6)) - ylim = 30 - expand = 1 - - if select == '1a': - sdrs = load_sdrs(workspace, task_name, filename, config='unet', gpus=1) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - - else: - raise Exception('Error!') - - eval_every_iterations = 10000 - total_ticks = 50 - ticks_freq = 10 - - ax.set_ylim(0, ylim) - ax.set_xlim(0, total_ticks) - ax.xaxis.set_ticks(np.arange(0, total_ticks + 1, ticks_freq)) - ax.xaxis.set_ticklabels( - np.arange( - 0, - total_ticks * eval_every_iterations + 1, - ticks_freq * eval_every_iterations, - ) - ) - ax.yaxis.set_ticks(np.arange(ylim + 1)) - ax.yaxis.set_ticklabels(np.arange(ylim + 1)) - ax.grid(color='b', linestyle='solid', linewidth=0.3) - plt.legend(handles=lines, loc=4) - - plt.savefig(fig_path) - print('Save figure to {}'.format(fig_path)) - - -if __name__ == '__main__': - parser = 
argparse.ArgumentParser()
-    parser.add_argument('--workspace', type=str, required=True)
-    parser.add_argument('--select', type=str, required=True)
-
-    args = parser.parse_args()
-
-    plot_statistics(args)
diff --git a/spaces/jordonpeter01/ai-comic-factory/src/components/ui/badge.tsx deleted file mode 100644
index 8a05c5e844f6551efb3b35a0a23c748a9a6639b4..0000000000000000000000000000000000000000
--- a/spaces/jordonpeter01/ai-comic-factory/src/components/ui/badge.tsx
+++ /dev/null
@@ -1,36 +0,0 @@
-import * as React from "react"
-import { cva, type VariantProps } from "class-variance-authority"
-
-import { cn } from "@/lib/utils"
-
-const badgeVariants = cva(
-  "inline-flex items-center rounded-full border border-stone-200 px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-stone-400 focus:ring-offset-2 dark:border-stone-800 dark:focus:ring-stone-800",
-  {
-    variants: {
-      variant: {
-        default:
-          "border-transparent bg-stone-900 text-stone-50 hover:bg-stone-900/80 dark:bg-stone-50 dark:text-stone-900 dark:hover:bg-stone-50/80",
-        secondary:
-          "border-transparent bg-stone-100 text-stone-900 hover:bg-stone-100/80 dark:bg-stone-800 dark:text-stone-50 dark:hover:bg-stone-800/80",
-        destructive:
-          "border-transparent bg-red-500 text-stone-50 hover:bg-red-500/80 dark:bg-red-900 dark:text-red-50 dark:hover:bg-red-900/80",
-        outline: "text-stone-950 dark:text-stone-50",
-      },
-    },
-    defaultVariants: {
-      variant: "default",
-    },
-  }
-)
-
-export interface BadgeProps
-  extends React.HTMLAttributes<HTMLDivElement>,
-    VariantProps<typeof badgeVariants> {}
-
-function Badge({ className, variant, ...props }: BadgeProps) {
-  return (
-    <div className={cn(badgeVariants({ variant }), className)} {...props} />
- ) -} - -export { Badge, badgeVariants } diff --git a/spaces/jroust/prompthero-openjourney/app.py b/spaces/jroust/prompthero-openjourney/app.py deleted file mode 100644 index 2193905172b6fb6d868bff88cc8311f491ec13b3..0000000000000000000000000000000000000000 --- a/spaces/jroust/prompthero-openjourney/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/prompthero/openjourney").launch() \ No newline at end of file diff --git a/spaces/justest/mdn-chatbot/src/app/page.tsx b/spaces/justest/mdn-chatbot/src/app/page.tsx deleted file mode 100644 index 3f3b0866b21a4a95c0a2a53adb4e77d3d3e4a662..0000000000000000000000000000000000000000 --- a/spaces/justest/mdn-chatbot/src/app/page.tsx +++ /dev/null @@ -1,113 +0,0 @@ -import Image from 'next/image' - -export default function Home() { - return ( -
-      {/* JSX markup was stripped in extraction. The surviving text ("Get
-          started by editing src/app/page.tsx" and a "Next.js Logo" image)
-          indicates this was the default create-next-app landing page. */}
- ) -} diff --git a/spaces/kartik016/aadharORPanClassifier/README.md b/spaces/kartik016/aadharORPanClassifier/README.md deleted file mode 100644 index 04a5d6de8a3222fdc8ce6287f7c696fbceb33c33..0000000000000000000000000000000000000000 --- a/spaces/kartik016/aadharORPanClassifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AadharORPanClassifier -emoji: 😻 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/katanaml-org/sparrow-ui/toolbar_main/index.html b/spaces/katanaml-org/sparrow-ui/toolbar_main/index.html deleted file mode 100644 index 6b7306fb725dae1cb18202326a77e48255968a20..0000000000000000000000000000000000000000 --- a/spaces/katanaml-org/sparrow-ui/toolbar_main/index.html +++ /dev/null @@ -1,149 +0,0 @@ - - - - - - - - -
-    <!-- HTML markup was stripped in extraction; none of the toolbar page's 149 lines of content survives. -->
\ No newline at end of file
diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/README.md deleted file mode 100644
index d75993ec4ac79d7a91e7d122518dda1ceac7e02e..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: ChatGLM2-VC-SadTalker
-emoji: 📺
-colorFrom: yellow
-colorTo: pink
-sdk: gradio
-sdk_version: 3.37.0
-app_file: app.py
-pinned: true
-license: mit
----
-
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/README.md deleted file mode 100644
index 2ee63a861229b68873561fa39bfa7c9a8b53b947..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/face3d/models/arcface_torch/README.md
+++ /dev/null
@@ -1,164 +0,0 @@
-# Distributed Arcface Training in Pytorch
-
-This is a deep learning library that makes face recognition training efficient and effective, and that can train
-tens of millions of identities on a single server.
-
-## Requirements
-
-- Install [pytorch](http://pytorch.org) (torch>=1.6.0); see our doc [install.md](docs/install.md).
-- `pip install -r requirements.txt`.
-- Download the dataset
-  from [https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_](https://github.com/deepinsight/insightface/tree/master/recognition/_datasets_).
-
-## How to Train
-
-To train a model, run `train.py` with the path to the configs:
-
-### 1. Single node, 8 GPUs:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r50
-```
-
-### 2. Multiple nodes, each node 8 GPUs:
-
-Node 0:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr="ip1" --master_port=1234 train.py configs/ms1mv3_r50
-```
-
-Node 1:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr="ip1" --master_port=1234 train.py configs/ms1mv3_r50
-```
-
-### 3. Training resnet2060 with 8 GPUs:
-
-```shell
-python -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 --master_addr="127.0.0.1" --master_port=1234 train.py configs/ms1mv3_r2060.py
-```
-
-## Model Zoo
-
-- The models are available for non-commercial research purposes only.
-- All models can be found here:
-- [Baidu Yun Pan](https://pan.baidu.com/s/1CL-l4zWqsI1oDuEEYVhj-g): e8pw
-- [onedrive](https://1drv.ms/u/s!AswpsDO2toNKq0lWY69vN58GR6mw?e=p9Ov5d)
-
-### Performance on [**ICCV2021-MFR**](http://iccv21-mfr.com/)
-
-The ICCV2021-MFR testset consists of non-celebrities, so we can ensure that it has very little overlap with publicly
-available face recognition training sets such as MS1M and CASIA, which are mostly collected from online celebrities.
-As a result, we can fairly evaluate the performance of different algorithms.
-
-For the **ICCV2021-MFR-ALL** set, TAR is measured on an all-to-all 1:1 protocol, with FAR less than 0.000001 (1e-6). The
-globalised multi-racial testset contains 242,143 identities and 1,624,305 images.
-
-For the **ICCV2021-MFR-MASK** set, TAR is measured on a mask-to-nonmask 1:1 protocol, with FAR less than 0.0001 (1e-4).
-Mask testset contains 6,964 identities, 6,964 masked images and 13,928 non-masked images.
-In total there are 13,928 positive pairs and 96,983,824 negative pairs.
-
-| Datasets | backbone | Training throughput | Size / MB | **ICCV2021-MFR-MASK** | **ICCV2021-MFR-ALL** |
-| :---: | :--- | :--- | :--- |:--- |:--- |
-| MS1MV3 | r18 | - | 91 | **47.85** | **68.33** |
-| Glint360k | r18 | 8536 | 91 | **53.32** | **72.07** |
-| MS1MV3 | r34 | - | 130 | **58.72** | **77.36** |
-| Glint360k | r34 | 6344 | 130 | **65.10** | **83.02** |
-| MS1MV3 | r50 | 5500 | 166 | **63.85** | **80.53** |
-| Glint360k | r50 | 5136 | 166 | **70.23** | **87.08** |
-| MS1MV3 | r100 | - | 248 | **69.09** | **84.31** |
-| Glint360k | r100 | 3332 | 248 | **75.57** | **90.66** |
-| MS1MV3 | mobilefacenet | 12185 | 7.8 | **41.52** | **65.26** |
-| Glint360k | mobilefacenet | 11197 | 7.8 | **44.52** | **66.48** |
-
-### Performance on IJB-C and Verification Datasets
-
-| Datasets | backbone | IJBC(1e-05) | IJBC(1e-04) | agedb30 | cfp_fp | lfw | log |
-| :---: | :--- | :--- | :--- | :--- |:--- |:--- |:--- |
-| MS1MV3 | r18 | 92.07 | 94.66 | 97.77 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r18_fp16/training.log)|
-| MS1MV3 | r34 | 94.10 | 95.90 | 98.10 | 98.67 | 99.80 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r34_fp16/training.log)|
-| MS1MV3 | r50 | 94.79 | 96.46 | 98.35 | 98.96 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r50_fp16/training.log)|
-| MS1MV3 | r100 | 95.31 | 96.81 | 98.48 | 99.06 | 99.85 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r100_fp16/training.log)|
-| MS1MV3 | **r2060** | 95.34 | 97.11 | 98.67 | 99.24 | 99.87 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/ms1mv3_arcface_r2060_fp16/training.log)|
-| Glint360k | r18-0.1 | 93.16 | 95.33 | 97.72 | 97.73 | 99.77 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r18_fp16_0.1/training.log)|
-| Glint360k | r34-0.1 | 95.16 | 96.56 | 98.33 | 98.78 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r34_fp16_0.1/training.log)|
-| Glint360k | r50-0.1 | 95.61 | 96.97 | 98.38 | 99.20 | 99.83 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r50_fp16_0.1/training.log)|
-| Glint360k | r100-0.1 | 95.88 | 97.32 | 98.48 | 99.29 | 99.82 |[log](https://raw.githubusercontent.com/anxiangsir/insightface_arcface_log/master/glint360k_cosface_r100_fp16_0.1/training.log)|
-
-[comment]: <> (More details: see [model.md](docs/modelzoo.md) in docs.)
-
-
-## [Speed Benchmark](docs/speed_benchmark.md)
-
-**Arcface Torch** can train on large-scale face recognition training sets efficiently and quickly. When the number of
-classes in the training set is greater than 300K and training is sufficient, the partial FC sampling strategy reaches the same
-accuracy with several times faster training and a smaller GPU memory footprint.
-Partial FC is a sparse variant of the model-parallel architecture for large-scale face recognition. Partial FC uses a
-sparse softmax, where each batch dynamically samples a subset of class centers for training. In each iteration, only a
-sparse part of the parameters is updated, which can reduce a lot of GPU memory and computation.
With Partial FC, -we can scale trainset of 29 millions identities, the largest to date. Partial FC also supports multi-machine distributed -training and mixed precision training. - -![Image text](https://github.com/anxiangsir/insightface_arcface_log/blob/master/partial_fc_v2.png) - -More details see -[speed_benchmark.md](docs/speed_benchmark.md) in docs. - -### 1. Training speed of different parallel methods (samples / second), Tesla V100 32GB * 8. (Larger is better) - -`-` means training failed because of gpu memory limitations. - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 4681 | 4824 | 5004 | -|1400000 | **1672** | 3043 | 4738 | -|5500000 | **-** | **1389** | 3975 | -|8000000 | **-** | **-** | 3565 | -|16000000 | **-** | **-** | 2679 | -|29000000 | **-** | **-** | **1855** | - -### 2. GPU memory cost of different parallel methods (MB per GPU), Tesla V100 32GB * 8. (Smaller is better) - -| Number of Identities in Dataset | Data Parallel | Model Parallel | Partial FC 0.1 | -| :--- | :--- | :--- | :--- | -|125000 | 7358 | 5306 | 4868 | -|1400000 | 32252 | 11178 | 6056 | -|5500000 | **-** | 32188 | 9854 | -|8000000 | **-** | **-** | 12310 | -|16000000 | **-** | **-** | 19950 | -|29000000 | **-** | **-** | 32324 | - -## Evaluation ICCV2021-MFR and IJB-C - -More details see [eval.md](docs/eval.md) in docs. - -## Test - -We tested many versions of PyTorch. Please create an issue if you are having trouble. - -- [x] torch 1.6.0 -- [x] torch 1.7.1 -- [x] torch 1.8.0 -- [x] torch 1.9.0 - -## Citation - -``` -@inproceedings{deng2019arcface, - title={Arcface: Additive angular margin loss for deep face recognition}, - author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos}, - booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition}, - pages={4690--4699}, - year={2019} -} -@inproceedings{an2020partical_fc, - title={Partial FC: Training 10 Million Identities on a Single Machine}, - author={An, Xiang and Zhu, Xuhan and Xiao, Yang and Wu, Lan and Zhang, Ming and Gao, Yuan and Qin, Bin and - Zhang, Debing and Fu Ying}, - booktitle={Arxiv 2010.05222}, - year={2020} -} -``` diff --git a/spaces/kevinwang676/Voice-Cloning-for-YouTube/README.md b/spaces/kevinwang676/Voice-Cloning-for-YouTube/README.md deleted file mode 100644 index bbaa1ca3564258399c6c3aa442bf17c650ce5ada..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Voice-Cloning-for-YouTube/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Voice Cloning -emoji: 😻 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: merve/voice-cloning ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/schedules/schedule_80k.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/schedules/schedule_80k.py deleted file mode 100644 index c190cee6bdc7922b688ea75dc8f152fa15c24617..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/configs/_base_/schedules/schedule_80k.py +++ /dev/null @@ -1,9 +0,0 @@ -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005) -optimizer_config = dict() -# learning policy -lr_config = dict(policy='poly', power=0.9, min_lr=1e-4, by_epoch=False) -# runtime settings -runner = 
dict(type='IterBasedRunner', max_iters=80000) -checkpoint_config = dict(by_epoch=False, interval=8000) -evaluation = dict(interval=8000, metric='mIoU') diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py deleted file mode 100644 index 9b9d3d5b3fe80247642d962edd6fb787537d01d6..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .fpn import FPN -from .multilevel_neck import MultiLevelNeck - -__all__ = ['FPN', 'MultiLevelNeck'] diff --git a/spaces/kokofixcomputers/chat-ui/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts b/spaces/kokofixcomputers/chat-ui/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts deleted file mode 100644 index 24c0067ede1b4118a1bdc7b05135269f2e941b48..0000000000000000000000000000000000000000 --- a/spaces/kokofixcomputers/chat-ui/src/routes/conversation/[id]/message/[messageId]/prompt/+server.ts +++ /dev/null @@ -1,52 +0,0 @@ -import { buildPrompt } from "$lib/buildPrompt"; -import { authCondition } from "$lib/server/auth"; -import { collections } from "$lib/server/database"; -import { models } from "$lib/server/models"; -import { error } from "@sveltejs/kit"; -import { ObjectId } from "mongodb"; - -export async function GET({ params, locals }) { - const convId = new ObjectId(params.id); - - const conv = await collections.conversations.findOne({ - _id: convId, - ...authCondition(locals), - }); - - if (!conv) { - throw error(404, "Conversation not found"); - } - - const messageId = params.messageId; - - const messageIndex = conv.messages.findIndex((msg) => msg.id === messageId); - - if (messageIndex === -1) { - throw error(404, "Message not found"); - } - - const model = models.find((m) => m.id === conv.model); - - if (!model) { - throw error(404, "Conversation model not found"); - } - - const prompt = buildPrompt(conv.messages.slice(0, messageIndex + 1), model); - - return new Response( - JSON.stringify( - { - note: "This is a preview of the prompt that will be sent to the model when retrying the message. 
It may differ from what was sent in the past if the parameters have been updated since", - prompt, - model: model.name, - parameters: { - ...model.parameters, - return_full_text: false, - }, - }, - null, - 2 - ), - { headers: { "Content-Type": "application/json" } } - ); -} diff --git a/spaces/krrishD/Helsinki-NLP_opus-mt-de-en/app.py b/spaces/krrishD/Helsinki-NLP_opus-mt-de-en/app.py deleted file mode 100644 index 0c8cc302bb8d96d3beca317509855bb741a5a575..0000000000000000000000000000000000000000 --- a/spaces/krrishD/Helsinki-NLP_opus-mt-de-en/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import gradio as gr -from transformers import pipeline - -pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en") - -def main(in_text): - print(in_text) - answer = pipe(in_text) - print(answer) - return answer[0]["translation_text"] - -with gr.Blocks() as demo: - gr.Markdown("""# Translation Engine!""") - with gr.Row(): - with gr.Column(): - text1 = gr.Textbox( - label="Input Text", - lines=1, - ) - output = gr.Textbox(label="Output Text") - b1 = gr.Button("Translate!") - b1.click(main, inputs=[text1], outputs=output) - gr.Markdown("""#### powered by [Tassle](https://bit.ly/3LXMklV)""") - - -if __name__ == "__main__": - demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py b/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py deleted file mode 100644 index f69d38200b6be4997673ae38ed481fd21f88b419..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/VToonify/vtoonify/model/encoder/encoders/psp_encoders.py +++ /dev/null @@ -1,186 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from torch.nn import Linear, Conv2d, BatchNorm2d, PReLU, Sequential, Module - -from model.encoder.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE -from model.stylegan.model import EqualLinear - - -class GradualStyleBlock(Module): - def __init__(self, in_c, out_c, spatial): - super(GradualStyleBlock, self).__init__() - self.out_c = out_c - self.spatial = spatial - num_pools = int(np.log2(spatial)) - modules = [] - modules += [Conv2d(in_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU()] - for i in range(num_pools - 1): - modules += [ - Conv2d(out_c, out_c, kernel_size=3, stride=2, padding=1), - nn.LeakyReLU() - ] - self.convs = nn.Sequential(*modules) - self.linear = EqualLinear(out_c, out_c, lr_mul=1) - - def forward(self, x): - x = self.convs(x) - x = x.view(-1, self.out_c) - x = self.linear(x) - return x - - -class GradualStyleEncoder(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(GradualStyleEncoder, self).__init__() - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - self.styles = nn.ModuleList() - self.style_count = opts.n_styles - self.coarse_ind = 3 - self.middle_ind = 7 - for i in range(self.style_count): - if i < self.coarse_ind: - style = GradualStyleBlock(512, 
512, 16) - elif i < self.middle_ind: - style = GradualStyleBlock(512, 512, 32) - else: - style = GradualStyleBlock(512, 512, 64) - self.styles.append(style) - self.latlayer1 = nn.Conv2d(256, 512, kernel_size=1, stride=1, padding=0) - self.latlayer2 = nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0) - - def _upsample_add(self, x, y): - '''Upsample and add two feature maps. - Args: - x: (Variable) top feature map to be upsampled. - y: (Variable) lateral feature map. - Returns: - (Variable) added feature map. - Note in PyTorch, when input size is odd, the upsampled feature map - with `F.upsample(..., scale_factor=2, mode='nearest')` - maybe not equal to the lateral feature map size. - e.g. - original input size: [N,_,15,15] -> - conv2d feature map size: [N,_,8,8] -> - upsampled feature map size: [N,_,16,16] - So we choose bilinear upsample which supports arbitrary output sizes. - ''' - _, _, H, W = y.size() - return F.interpolate(x, size=(H, W), mode='bilinear', align_corners=True) + y - - def forward(self, x): - x = self.input_layer(x) - - latents = [] - modulelist = list(self.body._modules.values()) - for i, l in enumerate(modulelist): - x = l(x) - if i == 6: - c1 = x - elif i == 20: - c2 = x - elif i == 23: - c3 = x - - for j in range(self.coarse_ind): - latents.append(self.styles[j](c3)) - - p2 = self._upsample_add(c3, self.latlayer1(c2)) - for j in range(self.coarse_ind, self.middle_ind): - latents.append(self.styles[j](p2)) - - p1 = self._upsample_add(p2, self.latlayer2(c1)) - for j in range(self.middle_ind, self.style_count): - latents.append(self.styles[j](p1)) - - out = torch.stack(latents, dim=1) - return out - - -class BackboneEncoderUsingLastLayerIntoW(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoW, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoW') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_pool = torch.nn.AdaptiveAvgPool2d((1, 1)) - self.linear = EqualLinear(512, 512, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_pool(x) - x = x.view(-1, 512) - x = self.linear(x) - return x - - -class BackboneEncoderUsingLastLayerIntoWPlus(Module): - def __init__(self, num_layers, mode='ir', opts=None): - super(BackboneEncoderUsingLastLayerIntoWPlus, self).__init__() - print('Using BackboneEncoderUsingLastLayerIntoWPlus') - assert num_layers in [50, 100, 152], 'num_layers should be 50,100, or 152' - assert mode in ['ir', 'ir_se'], 'mode should be ir or ir_se' - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.n_styles = opts.n_styles - self.input_layer = Sequential(Conv2d(opts.input_nc, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - self.output_layer_2 = Sequential(BatchNorm2d(512), - torch.nn.AdaptiveAvgPool2d((7, 7)), - Flatten(), - Linear(512 * 7 * 7, 512)) - self.linear = EqualLinear(512, 512 
* self.n_styles, lr_mul=1) - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer_2(x) - x = self.linear(x) - x = x.view(-1, self.n_styles, 512) - return x diff --git a/spaces/kurianbenoy/audioclassification/README.md b/spaces/kurianbenoy/audioclassification/README.md deleted file mode 100644 index a2c991f84d061676a1fa3e58b27c21d80ccd0f8a..0000000000000000000000000000000000000000 --- a/spaces/kurianbenoy/audioclassification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Audioclassification -emoji: 💻 -colorFrom: gray -colorTo: indigo -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py deleted file mode 100644 index 536ff2f98a0abb8b27fe6da44199534a32fd0c3e..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_D_(table_T_S_I_V_): - pass diff --git a/spaces/laiyer/llm-guard-playground/output.py b/spaces/laiyer/llm-guard-playground/output.py deleted file mode 100644 index dbcad2d3256d68d0beb33974a2c9b4b23f532331..0000000000000000000000000000000000000000 --- a/spaces/laiyer/llm-guard-playground/output.py +++ /dev/null @@ -1,552 +0,0 @@ -import logging -import time -from datetime import timedelta -from typing import Dict, List - -import streamlit as st -from llm_guard.input_scanners.anonymize import default_entity_types -from llm_guard.output_scanners import ( - JSON, - BanSubstrings, - BanTopics, - Bias, - Code, - Deanonymize, - FactualConsistency, - Language, - LanguageSame, - MaliciousURLs, - NoRefusal, - Regex, - Relevance, - Sensitive, -) -from llm_guard.output_scanners.relevance import all_models as relevance_models -from llm_guard.output_scanners.sentiment import Sentiment -from llm_guard.output_scanners.toxicity import Toxicity -from llm_guard.vault import Vault -from streamlit_tags import st_tags - -logger = logging.getLogger("llm-guard-playground") - - -def init_settings() -> (List, Dict): - all_scanners = [ - "BanSubstrings", - "BanTopics", - "Bias", - "Code", - "Deanonymize", - "JSON", - "Language", - "LanguageSame", - "MaliciousURLs", - "NoRefusal", - "FactualConsistency", - "Regex", - "Relevance", - "Sensitive", - "Sentiment", - "Toxicity", - ] - - st_enabled_scanners = st.sidebar.multiselect( - "Select scanners", - options=all_scanners, - default=all_scanners, - help="The list can be found here: https://laiyer-ai.github.io/llm-guard/output_scanners/bias/", - ) - - settings = {} - - if "BanSubstrings" in st_enabled_scanners: - st_bs_expander = st.sidebar.expander( - "Ban Substrings", - expanded=False, - ) - - with st_bs_expander: - st_bs_substrings = st.text_area( - "Enter substrings to ban (one per line)", - value="test\nhello\nworld\n", - height=200, - ).split("\n") - - st_bs_match_type = st.selectbox("Match type", ["str", "word"]) - st_bs_case_sensitive = st.checkbox("Case sensitive", value=False) - st_bs_redact 
= st.checkbox("Redact", value=False) - st_bs_contains_all = st.checkbox("Contains all", value=False) - - settings["BanSubstrings"] = { - "substrings": st_bs_substrings, - "match_type": st_bs_match_type, - "case_sensitive": st_bs_case_sensitive, - "redact": st_bs_redact, - "contains_all": st_bs_contains_all, - } - - if "BanTopics" in st_enabled_scanners: - st_bt_expander = st.sidebar.expander( - "Ban Topics", - expanded=False, - ) - - with st_bt_expander: - st_bt_topics = st_tags( - label="List of topics", - text="Type and press enter", - value=["violence"], - suggestions=[], - maxtags=30, - key="bt_topics", - ) - - st_bt_threshold = st.slider( - label="Threshold", - value=0.6, - min_value=0.0, - max_value=1.0, - step=0.05, - key="ban_topics_threshold", - ) - - settings["BanTopics"] = {"topics": st_bt_topics, "threshold": st_bt_threshold} - - if "Bias" in st_enabled_scanners: - st_bias_expander = st.sidebar.expander( - "Bias", - expanded=False, - ) - - with st_bias_expander: - st_bias_threshold = st.slider( - label="Threshold", - value=0.75, - min_value=0.0, - max_value=1.0, - step=0.05, - key="bias_threshold", - ) - - settings["Bias"] = {"threshold": st_bias_threshold} - - if "Code" in st_enabled_scanners: - st_cd_expander = st.sidebar.expander( - "Code", - expanded=False, - ) - - with st_cd_expander: - st_cd_languages = st.multiselect( - "Programming languages", - options=["python", "java", "javascript", "go", "php", "ruby"], - default=["python"], - ) - - st_cd_mode = st.selectbox("Mode", ["allowed", "denied"], index=0) - - settings["Code"] = {"languages": st_cd_languages, "mode": st_cd_mode} - - if "JSON" in st_enabled_scanners: - st_json_expander = st.sidebar.expander( - "JSON", - expanded=False, - ) - - with st_json_expander: - st_json_required_elements = st.slider( - label="Required elements", - value=0, - min_value=0, - max_value=10, - step=1, - key="json_required_elements", - help="The minimum number of JSON elements that should be present", - ) - - st_json_repair = st.checkbox("Repair", value=False, help="Attempt to repair the JSON") - - settings["JSON"] = { - "required_elements": st_json_required_elements, - "repair": st_json_repair, - } - - if "Language" in st_enabled_scanners: - st_lan_expander = st.sidebar.expander( - "Language", - expanded=False, - ) - - with st_lan_expander: - st_lan_valid_language = st.multiselect( - "Languages", - [ - "af", - "ar", - "bg", - "bn", - "ca", - "cs", - "cy", - "da", - "de", - "el", - "en", - "es", - "et", - "fa", - "fi", - "fr", - "gu", - "he", - "hi", - "hr", - "hu", - "id", - "it", - "ja", - "kn", - "ko", - "lt", - "lv", - "mk", - "ml", - "mr", - "ne", - "nl", - "no", - "pa", - "pl", - "pt", - "ro", - "ru", - "sk", - "sl", - "so", - "sq", - "sv", - "sw", - "ta", - "te", - "th", - "tl", - "tr", - "uk", - "ur", - "vi", - "zh-cn", - "zh-tw", - ], - default=["en"], - ) - - settings["Language"] = { - "valid_languages": st_lan_valid_language, - } - - if "MaliciousURLs" in st_enabled_scanners: - st_murls_expander = st.sidebar.expander( - "Malicious URLs", - expanded=False, - ) - - with st_murls_expander: - st_murls_threshold = st.slider( - label="Threshold", - value=0.75, - min_value=0.0, - max_value=1.0, - step=0.05, - key="murls_threshold", - ) - - settings["MaliciousURLs"] = {"threshold": st_murls_threshold} - - if "NoRefusal" in st_enabled_scanners: - st_no_ref_expander = st.sidebar.expander( - "No refusal", - expanded=False, - ) - - with st_no_ref_expander: - st_no_ref_threshold = st.slider( - label="Threshold", - value=0.5, - min_value=0.0, - 
max_value=1.0, - step=0.05, - key="no_ref_threshold", - ) - - settings["NoRefusal"] = {"threshold": st_no_ref_threshold} - - if "FactualConsistency" in st_enabled_scanners: - st_fc_expander = st.sidebar.expander( - "FactualConsistency", - expanded=False, - ) - - with st_fc_expander: - st_fc_minimum_score = st.slider( - label="Minimum score", - value=0.5, - min_value=0.0, - max_value=1.0, - step=0.05, - key="fc_threshold", - ) - - settings["FactualConsistency"] = {"minimum_score": st_fc_minimum_score} - - if "Regex" in st_enabled_scanners: - st_regex_expander = st.sidebar.expander( - "Regex", - expanded=False, - ) - - with st_regex_expander: - st_regex_patterns = st.text_area( - "Enter patterns to ban (one per line)", - value="Bearer [A-Za-z0-9-._~+/]+", - height=200, - ).split("\n") - - st_regex_type = st.selectbox( - "Match type", - ["good", "bad"], - index=1, - help="good: allow only good patterns, bad: ban bad patterns", - ) - - st_redact = st.checkbox( - "Redact", value=False, help="Replace the matched bad patterns with [REDACTED]" - ) - - settings["Regex"] = { - "patterns": st_regex_patterns, - "type": st_regex_type, - "redact": st_redact, - } - - if "Relevance" in st_enabled_scanners: - st_rele_expander = st.sidebar.expander( - "Relevance", - expanded=False, - ) - - with st_rele_expander: - st_rele_threshold = st.slider( - label="Threshold", - value=0.5, - min_value=0.0, - max_value=1.0, - step=0.05, - key="rele_threshold", - ) - - st_rele_model = st.selectbox("Embeddings model", relevance_models, index=1) - - settings["Relevance"] = {"threshold": st_rele_threshold, "model": st_rele_model} - - if "Sensitive" in st_enabled_scanners: - st_sens_expander = st.sidebar.expander( - "Sensitive", - expanded=False, - ) - - with st_sens_expander: - st_sens_entity_types = st_tags( - label="Sensitive entities", - text="Type and press enter", - value=default_entity_types, - suggestions=default_entity_types - + ["DATE_TIME", "NRP", "LOCATION", "MEDICAL_LICENSE", "US_PASSPORT"], - maxtags=30, - key="sensitive_entity_types", - ) - st.caption( - "Check all supported entities: https://llm-guard.com/input_scanners/anonymize/" - ) - st_sens_redact = st.checkbox("Redact", value=False, key="sens_redact") - st_sens_threshold = st.slider( - label="Threshold", - value=0.0, - min_value=0.0, - max_value=1.0, - step=0.1, - key="sens_threshold", - ) - - settings["Sensitive"] = { - "entity_types": st_sens_entity_types, - "redact": st_sens_redact, - "threshold": st_sens_threshold, - } - - if "Sentiment" in st_enabled_scanners: - st_sent_expander = st.sidebar.expander( - "Sentiment", - expanded=False, - ) - - with st_sent_expander: - st_sent_threshold = st.slider( - label="Threshold", - value=-0.1, - min_value=-1.0, - max_value=1.0, - step=0.1, - key="sentiment_threshold", - help="Negative values are negative sentiment, positive values are positive sentiment", - ) - - settings["Sentiment"] = {"threshold": st_sent_threshold} - - if "Toxicity" in st_enabled_scanners: - st_tox_expander = st.sidebar.expander( - "Toxicity", - expanded=False, - ) - - with st_tox_expander: - st_tox_threshold = st.slider( - label="Threshold", - value=0.0, - min_value=-1.0, - max_value=1.0, - step=0.05, - key="toxicity_threshold", - help="A negative value (closer to 0 as the label output) indicates toxicity in the text, while a positive logit (closer to 1 as the label output) suggests non-toxicity.", - ) - - settings["Toxicity"] = {"threshold": st_tox_threshold} - - return st_enabled_scanners, settings - - -def get_scanner(scanner_name: str, 
vault: Vault, settings: Dict): - logger.debug(f"Initializing {scanner_name} scanner") - - if scanner_name == "BanSubstrings": - return BanSubstrings( - substrings=settings["substrings"], - match_type=settings["match_type"], - case_sensitive=settings["case_sensitive"], - redact=settings["redact"], - contains_all=settings["contains_all"], - ) - - if scanner_name == "BanTopics": - return BanTopics(topics=settings["topics"], threshold=settings["threshold"]) - - if scanner_name == "Bias": - return Bias(threshold=settings["threshold"], use_onnx=True) - - if scanner_name == "Deanonymize": - return Deanonymize(vault=vault) - - if scanner_name == "JSON": - return JSON(required_elements=settings["required_elements"], repair=settings["repair"]) - - if scanner_name == "Language": - return Language(valid_languages=settings["valid_languages"]) - - if scanner_name == "LanguageSame": - return LanguageSame() - - if scanner_name == "Code": - mode = settings["mode"] - - allowed_languages = None - denied_languages = None - if mode == "allowed": - allowed_languages = settings["languages"] - elif mode == "denied": - denied_languages = settings["languages"] - - return Code(allowed=allowed_languages, denied=denied_languages, use_onnx=True) - - if scanner_name == "MaliciousURLs": - return MaliciousURLs(threshold=settings["threshold"], use_onnx=True) - - if scanner_name == "NoRefusal": - return NoRefusal(threshold=settings["threshold"]) - - if scanner_name == "FactualConsistency": - return FactualConsistency(minimum_score=settings["minimum_score"]) - - if scanner_name == "Regex": - match_type = settings["type"] - - good_patterns = None - bad_patterns = None - if match_type == "good": - good_patterns = settings["patterns"] - elif match_type == "bad": - bad_patterns = settings["patterns"] - - return Regex( - good_patterns=good_patterns, bad_patterns=bad_patterns, redact=settings["redact"] - ) - - if scanner_name == "Relevance": - return Relevance(threshold=settings["threshold"], model=settings["model"]) - - if scanner_name == "Sensitive": - return Sensitive( - entity_types=settings["entity_types"], - redact=settings["redact"], - threshold=settings["threshold"], - use_onnx=True, - ) - - if scanner_name == "Sentiment": - return Sentiment(threshold=settings["threshold"]) - - if scanner_name == "Toxicity": - return Toxicity(threshold=settings["threshold"], use_onnx=True) - - raise ValueError("Unknown scanner name") - - -def scan( - vault: Vault, - enabled_scanners: List[str], - settings: Dict, - prompt: str, - text: str, - fail_fast: bool = False, -) -> (str, List[Dict[str, any]]): - sanitized_output = text - results = [] - - status_text = "Scanning prompt..." - if fail_fast: - status_text = "Scanning prompt (fail fast mode)..." 
- - with st.status(status_text, expanded=True) as status: - for scanner_name in enabled_scanners: - st.write(f"{scanner_name} scanner...") - scanner = get_scanner( - scanner_name, vault, settings[scanner_name] if scanner_name in settings else {} - ) - - start_time = time.monotonic() - sanitized_output, is_valid, risk_score = scanner.scan(prompt, sanitized_output) - end_time = time.monotonic() - - results.append( - { - "scanner": scanner_name, - "is_valid": is_valid, - "risk_score": risk_score, - "took_sec": round(timedelta(seconds=end_time - start_time).total_seconds(), 2), - } - ) - - if fail_fast and not is_valid: - break - - status.update(label="Scanning complete", state="complete", expanded=False) - - return sanitized_output, results diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/README.md b/spaces/lambdalabs/LambdaSuperRes/KAIR/README.md deleted file mode 100644 index 8dd33fabc499cf4287c6deaed49f9c6b04709241..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/README.md +++ /dev/null @@ -1,343 +0,0 @@ -## Training and testing codes for USRNet, DnCNN, FFDNet, SRMD, DPSR, MSRResNet, ESRGAN, BSRGAN, SwinIR, VRT -[![download](https://img.shields.io/github/downloads/cszn/KAIR/total.svg)](https://github.com/cszn/KAIR/releases) ![visitors](https://visitor-badge.glitch.me/badge?page_id=cszn/KAIR) - -[Kai Zhang](https://cszn.github.io/) - -*[Computer Vision Lab](https://vision.ee.ethz.ch/the-institute.html), ETH Zurich, Switzerland* - -_______ -- **_News (2022-02-15)_**: We release [the training codes](https://github.com/cszn/KAIR/blob/master/docs/README_VRT.md) of [VRT ![GitHub Stars](https://img.shields.io/github/stars/JingyunLiang/VRT?style=social)](https://github.com/JingyunLiang/VRT) for video SR, deblurring and denoising. -

- - - - - -

- -- **_News (2021-12-23)_**: Our techniques are adopted in [https://www.amemori.ai/](https://www.amemori.ai/). -- **_News (2021-12-23)_**: Our new work for practical image denoising. - -- -- [](https://imgsli.com/ODczMTc) -[](https://imgsli.com/ODczMTY) -- **_News (2021-09-09)_**: Add [main_download_pretrained_models.py](https://github.com/cszn/KAIR/blob/master/main_download_pretrained_models.py) to download pre-trained models. -- **_News (2021-09-08)_**: Add [matlab code](https://github.com/cszn/KAIR/tree/master/matlab) to zoom local part of an image for the purpose of comparison between different results. -- **_News (2021-09-07)_**: We upload [the training code](https://github.com/cszn/KAIR/blob/master/docs/README_SwinIR.md) of [SwinIR ![GitHub Stars](https://img.shields.io/github/stars/JingyunLiang/SwinIR?style=social)](https://github.com/JingyunLiang/SwinIR) and provide an [interactive online Colob demo for real-world image SR](https://colab.research.google.com/gist/JingyunLiang/a5e3e54bc9ef8d7bf594f6fee8208533/swinir-demo-on-real-world-image-sr.ipynb). Try to super-resolve your own images on Colab! google colab logo - -|Real-World Image (x4)|[BSRGAN, ICCV2021](https://github.com/cszn/BSRGAN)|[Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)|SwinIR (ours)| -| :--- | :---: | :-----: | :-----: | -|||| -||||| - -- **_News (2021-08-31)_**: We upload the [training code of BSRGAN](https://github.com/cszn/BSRGAN#training). -- **_News (2021-08-24)_**: We upload the BSRGAN degradation model. -- **_News (2021-08-22)_**: Support multi-feature-layer VGG perceptual loss and UNet discriminator. -- **_News (2021-08-18)_**: We upload the extended BSRGAN degradation model. It is slightly different from our published version. - -- **_News (2021-06-03)_**: Add testing codes of [GPEN (CVPR21)](https://github.com/yangxy/GPEN) for face image enhancement: [main_test_face_enhancement.py](https://github.com/cszn/KAIR/blob/master/main_test_face_enhancement.py) - - - - - - - - - -- **_News (2021-05-13)_**: Add [PatchGAN discriminator](https://github.com/cszn/KAIR/blob/master/models/network_discriminator.py). - -- **_News (2021-05-12)_**: Support distributed training, see also [https://github.com/xinntao/BasicSR/blob/master/docs/TrainTest.md](https://github.com/xinntao/BasicSR/blob/master/docs/TrainTest.md). - -- **_News (2021-01)_**: [BSRGAN](https://github.com/cszn/BSRGAN) for blind real image super-resolution will be added. - -- **_Pull requests are welcome!_** - -- **Correction (2020-10)**: If you use multiple GPUs for GAN training, remove or comment [Line 105](https://github.com/cszn/KAIR/blob/e52a6944c6a40ba81b88430ffe38fd6517e0449e/models/model_gan.py#L105) to enable `DataParallel` for fast training - -- **News (2020-10)**: Add [utils_receptivefield.py](https://github.com/cszn/KAIR/blob/master/utils/utils_receptivefield.py) to calculate receptive field. - -- **News (2020-8)**: A `deep plug-and-play image restoration toolbox` is released at [cszn/DPIR](https://github.com/cszn/DPIR). - -- **Tips (2020-8)**: Use [this](https://github.com/cszn/KAIR/blob/9fd17abff001ab82a22070f7e442bb5246d2d844/main_challenge_sr.py#L147) to avoid `out of memory` issue. - -- **News (2020-7)**: Add [main_challenge_sr.py](https://github.com/cszn/KAIR/blob/23b0d0f717980e48fad02513ba14045d57264fe1/main_challenge_sr.py#L90) to get `FLOPs`, `#Params`, `Runtime`, `#Activations`, `#Conv`, and `Max Memory Allocated`. 
-```python -from utils.utils_modelsummary import get_model_activation, get_model_flops -input_dim = (3, 256, 256) # set the input dimension -activations, num_conv2d = get_model_activation(model, input_dim) -logger.info('{:>16s} : {:<.4f} [M]'.format('#Activations', activations/10**6)) -logger.info('{:>16s} : {:16s} : {:<.4f} [G]'.format('FLOPs', flops/10**9)) -num_parameters = sum(map(lambda x: x.numel(), model.parameters())) -logger.info('{:>16s} : {:<.4f} [M]'.format('#Params', num_parameters/10**6)) -``` - -- **News (2020-6)**: Add [USRNet (CVPR 2020)](https://github.com/cszn/USRNet) for training and testing. - - [Network Architecture](https://github.com/cszn/KAIR/blob/3357aa0e54b81b1e26ceb1cee990f39add235e17/models/network_usrnet.py#L309) - - [Dataset](https://github.com/cszn/KAIR/blob/6c852636d3715bb281637863822a42c72739122a/data/dataset_usrnet.py#L16) - - -Clone repo ----------- -``` -git clone https://github.com/cszn/KAIR.git -``` -``` -pip install -r requirement.txt -``` - - - -Training ----------- - -You should modify the json file from [options](https://github.com/cszn/KAIR/tree/master/options) first, for example, -setting ["gpu_ids": [0,1,2,3]](https://github.com/cszn/KAIR/blob/ff80d265f64de67dfb3ffa9beff8949773c81a3d/options/train_msrresnet_psnr.json#L4) if 4 GPUs are used, -setting ["dataroot_H": "trainsets/trainH"](https://github.com/cszn/KAIR/blob/ff80d265f64de67dfb3ffa9beff8949773c81a3d/options/train_msrresnet_psnr.json#L24) if path of the high quality dataset is `trainsets/trainH`. - -- Training with `DataParallel` - PSNR - - -```python -python main_train_psnr.py --opt options/train_msrresnet_psnr.json -``` - -- Training with `DataParallel` - GAN - -```python -python main_train_gan.py --opt options/train_msrresnet_gan.json -``` - -- Training with `DistributedDataParallel` - PSNR - 4 GPUs - -```python -python -m torch.distributed.launch --nproc_per_node=4 --master_port=1234 main_train_psnr.py --opt options/train_msrresnet_psnr.json --dist True -``` - -- Training with `DistributedDataParallel` - PSNR - 8 GPUs - -```python -python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 main_train_psnr.py --opt options/train_msrresnet_psnr.json --dist True -``` - -- Training with `DistributedDataParallel` - GAN - 4 GPUs - -```python -python -m torch.distributed.launch --nproc_per_node=4 --master_port=1234 main_train_gan.py --opt options/train_msrresnet_gan.json --dist True -``` - -- Training with `DistributedDataParallel` - GAN - 8 GPUs - -```python -python -m torch.distributed.launch --nproc_per_node=8 --master_port=1234 main_train_gan.py --opt options/train_msrresnet_gan.json --dist True -``` - -- Kill distributed training processes of `main_train_gan.py` - -```python -kill $(ps aux | grep main_train_gan.py | grep -v grep | awk '{print $2}') -``` - ----------- -| Method | Original Link | -|---|---| -| DnCNN |[https://github.com/cszn/DnCNN](https://github.com/cszn/DnCNN)| -| FDnCNN |[https://github.com/cszn/DnCNN](https://github.com/cszn/DnCNN)| -| FFDNet | [https://github.com/cszn/FFDNet](https://github.com/cszn/FFDNet)| -| SRMD | [https://github.com/cszn/SRMD](https://github.com/cszn/SRMD)| -| DPSR-SRResNet | [https://github.com/cszn/DPSR](https://github.com/cszn/DPSR)| -| SRResNet | [https://github.com/xinntao/BasicSR](https://github.com/xinntao/BasicSR)| -| ESRGAN | [https://github.com/xinntao/ESRGAN](https://github.com/xinntao/ESRGAN)| -| RRDB | [https://github.com/xinntao/ESRGAN](https://github.com/xinntao/ESRGAN)| -| IMDB | 
[https://github.com/Zheng222/IMDN](https://github.com/Zheng222/IMDN)| -| USRNet | [https://github.com/cszn/USRNet](https://github.com/cszn/USRNet)| -| DRUNet | [https://github.com/cszn/DPIR](https://github.com/cszn/DPIR)| -| DPIR | [https://github.com/cszn/DPIR](https://github.com/cszn/DPIR)| -| BSRGAN | [https://github.com/cszn/BSRGAN](https://github.com/cszn/BSRGAN)| -| SwinIR | [https://github.com/JingyunLiang/SwinIR](https://github.com/JingyunLiang/SwinIR)| -| VRT | [https://github.com/JingyunLiang/VRT](https://github.com/JingyunLiang/VRT) | - -Network architectures ----------- -* [USRNet](https://github.com/cszn/USRNet) - - - -* DnCNN - - - -* IRCNN denoiser - - - -* FFDNet - - - -* SRMD - - - -* SRResNet, SRGAN, RRDB, ESRGAN - - - -* IMDN - - ----- - - - -Testing ----------- -|Method | [model_zoo](model_zoo)| -|---|---| -| [main_test_dncnn.py](main_test_dncnn.py) |```dncnn_15.pth, dncnn_25.pth, dncnn_50.pth, dncnn_gray_blind.pth, dncnn_color_blind.pth, dncnn3.pth```| -| [main_test_ircnn_denoiser.py](main_test_ircnn_denoiser.py) | ```ircnn_gray.pth, ircnn_color.pth```| -| [main_test_fdncnn.py](main_test_fdncnn.py) | ```fdncnn_gray.pth, fdncnn_color.pth, fdncnn_gray_clip.pth, fdncnn_color_clip.pth```| -| [main_test_ffdnet.py](main_test_ffdnet.py) | ```ffdnet_gray.pth, ffdnet_color.pth, ffdnet_gray_clip.pth, ffdnet_color_clip.pth```| -| [main_test_srmd.py](main_test_srmd.py) | ```srmdnf_x2.pth, srmdnf_x3.pth, srmdnf_x4.pth, srmd_x2.pth, srmd_x3.pth, srmd_x4.pth```| -| | **The above models are converted from MatConvNet.** | -| [main_test_dpsr.py](main_test_dpsr.py) | ```dpsr_x2.pth, dpsr_x3.pth, dpsr_x4.pth, dpsr_x4_gan.pth```| -| [main_test_msrresnet.py](main_test_msrresnet.py) | ```msrresnet_x4_psnr.pth, msrresnet_x4_gan.pth```| -| [main_test_rrdb.py](main_test_rrdb.py) | ```rrdb_x4_psnr.pth, rrdb_x4_esrgan.pth```| -| [main_test_imdn.py](main_test_imdn.py) | ```imdn_x4.pth```| - -[model_zoo](model_zoo) --------- -- download link [https://drive.google.com/drive/folders/13kfr3qny7S2xwG9h7v95F5mkWs0OmU0D](https://drive.google.com/drive/folders/13kfr3qny7S2xwG9h7v95F5mkWs0OmU0D) - -[trainsets](trainsets) ----------- -- [https://github.com/xinntao/BasicSR/blob/master/docs/DatasetPreparation.md](https://github.com/xinntao/BasicSR/blob/master/docs/DatasetPreparation.md) -- [train400](https://github.com/cszn/DnCNN/tree/master/TrainingCodes/DnCNN_TrainingCodes_v1.0/data) -- [DIV2K](https://data.vision.ee.ethz.ch/cvl/DIV2K/) -- [Flickr2K](https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar) -- optional: use [split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=512, p_overlap=96, p_max=800)](https://github.com/cszn/KAIR/blob/3ee0bf3e07b90ec0b7302d97ee2adb780617e637/utils/utils_image.py#L123) to get ```trainsets/trainH``` with small images for fast data loading - -[testsets](testsets) ------------ -- [https://github.com/xinntao/BasicSR/blob/master/docs/DatasetPreparation.md](https://github.com/xinntao/BasicSR/blob/master/docs/DatasetPreparation.md) -- [set12](https://github.com/cszn/FFDNet/tree/master/testsets) -- [bsd68](https://github.com/cszn/FFDNet/tree/master/testsets) -- [cbsd68](https://github.com/cszn/FFDNet/tree/master/testsets) -- [kodak24](https://github.com/cszn/FFDNet/tree/master/testsets) -- [srbsd68](https://github.com/cszn/DPSR/tree/master/testsets/BSD68/GT) -- set5 -- set14 -- cbsd100 -- urban100 -- manga109 - - -References ----------- -```BibTex -@article{liang2022vrt, -title={VRT: A Video Restoration Transformer}, -author={Liang, Jingyun and Cao, Jiezhang and Fan, 
Yuchen and Zhang, Kai and Ranjan, Rakesh and Li, Yawei and Timofte, Radu and Van Gool, Luc}, -journal={arXiv preprint arXiv:2022.00000}, -year={2022} -} -@inproceedings{liang2021swinir, -title={SwinIR: Image Restoration Using Swin Transformer}, -author={Liang, Jingyun and Cao, Jiezhang and Sun, Guolei and Zhang, Kai and Van Gool, Luc and Timofte, Radu}, -booktitle={IEEE International Conference on Computer Vision Workshops}, -pages={1833--1844}, -year={2021} -} -@inproceedings{zhang2021designing, -title={Designing a Practical Degradation Model for Deep Blind Image Super-Resolution}, -author={Zhang, Kai and Liang, Jingyun and Van Gool, Luc and Timofte, Radu}, -booktitle={IEEE International Conference on Computer Vision}, -pages={4791--4800}, -year={2021} -} -@article{zhang2021plug, % DPIR & DRUNet & IRCNN - title={Plug-and-Play Image Restoration with Deep Denoiser Prior}, - author={Zhang, Kai and Li, Yawei and Zuo, Wangmeng and Zhang, Lei and Van Gool, Luc and Timofte, Radu}, - journal={IEEE Transactions on Pattern Analysis and Machine Intelligence}, - year={2021} -} -@inproceedings{zhang2020aim, % efficientSR_challenge - title={AIM 2020 Challenge on Efficient Super-Resolution: Methods and Results}, - author={Kai Zhang and Martin Danelljan and Yawei Li and Radu Timofte and others}, - booktitle={European Conference on Computer Vision Workshops}, - year={2020} -} -@inproceedings{zhang2020deep, % USRNet - title={Deep unfolding network for image super-resolution}, - author={Zhang, Kai and Van Gool, Luc and Timofte, Radu}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3217--3226}, - year={2020} -} -@article{zhang2017beyond, % DnCNN - title={Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising}, - author={Zhang, Kai and Zuo, Wangmeng and Chen, Yunjin and Meng, Deyu and Zhang, Lei}, - journal={IEEE Transactions on Image Processing}, - volume={26}, - number={7}, - pages={3142--3155}, - year={2017} -} -@inproceedings{zhang2017learning, % IRCNN -title={Learning deep CNN denoiser prior for image restoration}, -author={Zhang, Kai and Zuo, Wangmeng and Gu, Shuhang and Zhang, Lei}, -booktitle={IEEE conference on computer vision and pattern recognition}, -pages={3929--3938}, -year={2017} -} -@article{zhang2018ffdnet, % FFDNet, FDnCNN - title={FFDNet: Toward a fast and flexible solution for CNN-based image denoising}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - journal={IEEE Transactions on Image Processing}, - volume={27}, - number={9}, - pages={4608--4622}, - year={2018} -} -@inproceedings{zhang2018learning, % SRMD - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} -} -@inproceedings{zhang2019deep, % DPSR - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} -} -@InProceedings{wang2018esrgan, % ESRGAN, MSRResNet - author = {Wang, Xintao and Yu, Ke and Wu, Shixiang and Gu, Jinjin and Liu, Yihao and Dong, Chao and Qiao, Yu and Loy, Chen Change}, - title = {ESRGAN: Enhanced super-resolution generative adversarial networks}, - booktitle = {The European Conference on Computer Vision Workshops (ECCVW)}, - month = {September}, - year = {2018} -} 
-@inproceedings{hui2019lightweight, % IMDN - title={Lightweight Image Super-Resolution with Information Multi-distillation Network}, - author={Hui, Zheng and Gao, Xinbo and Yang, Yunchu and Wang, Xiumei}, - booktitle={Proceedings of the 27th ACM International Conference on Multimedia (ACM MM)}, - pages={2024--2032}, - year={2019} -} -@inproceedings{zhang2019aim, % IMDN - title={AIM 2019 Challenge on Constrained Super-Resolution: Methods and Results}, - author={Kai Zhang and Shuhang Gu and Radu Timofte and others}, - booktitle={IEEE International Conference on Computer Vision Workshops}, - year={2019} -} -@inproceedings{yang2021gan, - title={GAN Prior Embedded Network for Blind Face Restoration in the Wild}, - author={Tao Yang, Peiran Ren, Xuansong Xie, and Lei Zhang}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - year={2021} -} -``` diff --git a/spaces/lekkalar/chatbot-pdf-gpt4key-langchain-chroma-prompttemp-tabs-dataframe-ocrmypdf-sqlite-csv-returns-json/README.md b/spaces/lekkalar/chatbot-pdf-gpt4key-langchain-chroma-prompttemp-tabs-dataframe-ocrmypdf-sqlite-csv-returns-json/README.md deleted file mode 100644 index 7e9aea773593f6bfd8391e41fb711965a43ec98c..0000000000000000000000000000000000000000 --- a/spaces/lekkalar/chatbot-pdf-gpt4key-langchain-chroma-prompttemp-tabs-dataframe-ocrmypdf-sqlite-csv-returns-json/README.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -title: >- - AskMoli - Chatbot For PDF - langchain,gpt4,chromadb,promptTemplate,ocrmypdf,sqlite,admin - page,dataframe,json response,csv,tabs -emoji: 👁 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -duplicated_from: >- - lekkalar/chatgpt-for-pdf-using-langchain-gpt4-chromadb-prompttemplate-tabs-dataframe-ocrmypdf-sqlite-csv ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/libhost/tech/pages/_document.js b/spaces/libhost/tech/pages/_document.js deleted file mode 100644 index 54e8bf3e2a29015a45e11cdc279e06b459890d8b..0000000000000000000000000000000000000000 --- a/spaces/libhost/tech/pages/_document.js +++ /dev/null @@ -1,13 +0,0 @@ -import { Html, Head, Main, NextScript } from 'next/document' - -export default function Document() { - return ( - - - -
- - - - ) -} diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Clue Cluedo The Classic Mystery Game Key Serial Number.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Clue Cluedo The Classic Mystery Game Key Serial Number.md deleted file mode 100644 index 811ebaffb2ebf4fe044dddee772abe718a2f0d83..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Clue Cluedo The Classic Mystery Game Key Serial Number.md +++ /dev/null @@ -1,6 +0,0 @@ -

Clue Cluedo: The Classic Mystery Game key serial number


Download Ziphttps://bytlly.com/2uGwQ4



- -Buy Clue/Cluedo: The Classic Mystery Game. Your question ... Warranty Policy. No Warranty. Parody Classic Series: Clue Cluedo - Lost in Vegas Board Game. 4d29de3e1b
-
-
-

diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/How To Reset Resharper Evaluation Period [TOP].md b/spaces/lincquiQcaudo/Top-20-Diffusion/How To Reset Resharper Evaluation Period [TOP].md deleted file mode 100644 index ebbf885c40e48c06c73758b39c4a0c3c977aba5a..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/How To Reset Resharper Evaluation Period [TOP].md +++ /dev/null @@ -1,8 +0,0 @@ -
-

to clean the registry for all resharper eaps, run resharper.exe /uninstall.exe /uninstall resharper. this removes all resharper related registry entries for all versions of the eap. for more information, see rsrp-576961. the uninstall command runs with administrative privileges. once it has completed, the process completes and the entries are removed.

-

a new pycharm 2022.2 eap 3 build is available from ourwebsite, via thetoolbox app, or as a snap package (if you are using ubuntu). if you are on macos, there is a separate build for apple silicon (m1 chip). important: eap builds are not fully tested and might be unstable. keyboard shortcut to change the font size globally for this release, weve resolved a long-standing feature request by introducing a keyboard shortcut that changes the font size across all editors. to increase the font size, press. /alt+shift+period. to decrease it, press,/alt+shift+

-

How To Reset Resharper Evaluation Period


Download File ✶✶✶ https://bytlly.com/2uGxzY



-

trial software is basically a program you download and use for a certain period of time. the software may include full or limited features. whenever you install trial software, entries are downloaded into the registry much like other applications. to remove the entries and clean the registry, you have to first uninstall the trial application. cleaning and removing trial software registry entries after uninstalling helps minimize the possibility of future registry problems.

-

for corporate clients, resharper allows storing and distributing license tickets through jetbrains license server. the server allows enterprises to establish limited or unlimited number of product licenses and distribute them within the corporate network.

899543212b
-
-
\ No newline at end of file diff --git a/spaces/lithiumice/SadTalker/src/audio2exp_models/networks.py b/spaces/lithiumice/SadTalker/src/audio2exp_models/networks.py deleted file mode 100644 index f052e18101f5446a527ae354b3621e7d0d4991cc..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/audio2exp_models/networks.py +++ /dev/null @@ -1,74 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import nn - -class Conv2d(nn.Module): - def __init__(self, cin, cout, kernel_size, stride, padding, residual=False, use_act = True, *args, **kwargs): - super().__init__(*args, **kwargs) - self.conv_block = nn.Sequential( - nn.Conv2d(cin, cout, kernel_size, stride, padding), - nn.BatchNorm2d(cout) - ) - self.act = nn.ReLU() - self.residual = residual - self.use_act = use_act - - def forward(self, x): - out = self.conv_block(x) - if self.residual: - out += x - - if self.use_act: - return self.act(out) - else: - return out - -class SimpleWrapperV2(nn.Module): - def __init__(self) -> None: - super().__init__() - self.audio_encoder = nn.Sequential( - Conv2d(1, 32, kernel_size=3, stride=1, padding=1), - Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(64, 128, kernel_size=3, stride=3, padding=1), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1), - Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(256, 512, kernel_size=3, stride=1, padding=0), - Conv2d(512, 512, kernel_size=1, stride=1, padding=0), - ) - - #### load the pre-trained audio_encoder - #self.audio_encoder = self.audio_encoder.to(device) - ''' - wav2lip_state_dict = torch.load('/apdcephfs_cq2/share_1290939/wenxuazhang/checkpoints/wav2lip.pth')['state_dict'] - state_dict = self.audio_encoder.state_dict() - - for k,v in wav2lip_state_dict.items(): - if 'audio_encoder' in k: - print('init:', k) - state_dict[k.replace('module.audio_encoder.', '')] = v - self.audio_encoder.load_state_dict(state_dict) - ''' - - self.mapping1 = nn.Linear(512+64+1, 64) - #self.mapping2 = nn.Linear(30, 64) - #nn.init.constant_(self.mapping1.weight, 0.) - nn.init.constant_(self.mapping1.bias, 0.) 
- - def forward(self, x, ref, ratio): - x = self.audio_encoder(x).view(x.size(0), -1) - ref_reshape = ref.reshape(x.size(0), -1) - ratio = ratio.reshape(x.size(0), -1) - - y = self.mapping1(torch.cat([x, ref_reshape, ratio], dim=1)) - out = y.reshape(ref.shape[0], ref.shape[1], -1) #+ ref # resudial - return out diff --git a/spaces/lizhen30/LangChainGo/test.py b/spaces/lizhen30/LangChainGo/test.py deleted file mode 100644 index f2353dd87bbf47e5eb04c3b5c6e7298b0b5bf09e..0000000000000000000000000000000000000000 --- a/spaces/lizhen30/LangChainGo/test.py +++ /dev/null @@ -1,33 +0,0 @@ -import time -import asyncio - -from langchain.llms import OpenAI - -def generate_serially(): - llm = OpenAI(temperature=0.9) - for _ in range(10): - resp = llm.generate(["Hello, how are you?"]) - print(resp.generations[0][0].text) - - -async def async_generate(llm): - resp = await llm.agenerate(["Hello, how are you?"]) - print(resp.generations[0][0].text) - - -async def generate_concurrently(): - llm = OpenAI(temperature=0.9) - tasks = [async_generate(llm) for _ in range(10)] - await asyncio.gather(*tasks) - - -s = time.perf_counter() -# If running this outside of Jupyter, use asyncio.run(generate_concurrently()) -generate_concurrently() -elapsed = time.perf_counter() - s -print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m') - -s = time.perf_counter() -generate_serially() -elapsed = time.perf_counter() - s -print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m') \ No newline at end of file diff --git a/spaces/luckwill/chiakicc/text/japanese.py b/spaces/luckwill/chiakicc/text/japanese.py deleted file mode 100644 index 375e4d50872d5c68ee57ca17470a2ca425425eba..0000000000000000000000000000000000000000 --- a/spaces/luckwill/chiakicc/text/japanese.py +++ /dev/null @@ -1,153 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - -# List of (romaji, ipa) pairs for marks: -_romaji_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ts', 'ʦ'), - ('u', 'ɯ'), - ('j', 'ʥ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (romaji, ipa2) pairs for marks: -_romaji_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('u', 'ɯ'), - ('ʧ', 'tʃ'), - ('j', 'dʑ'), - ('y', 'j'), - ('ni', 'n^i'), - ('nj', 'n^'), - ('hi', 'çi'), - ('hj', 'ç'), - ('f', 'ɸ'), - ('I', 'i*'), - ('U', 'ɯ*'), - ('r', 'ɾ') -]] - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - -def symbols_to_japanese(text): - for regex, replacement in 
_symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_romaji_with_accent(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', - 'ʃ').replace('cl', 'Q') - else: - continue - # n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int( - re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - return text - - -def get_real_sokuon(text): - for regex, replacement in _real_sokuon: - text = re.sub(regex, replacement, text) - return text - - -def get_real_hatsuon(text): - for regex, replacement in _real_hatsuon: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = re.sub( - r'([aiueo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa2(text): - text = japanese_to_romaji_with_accent(text).replace('...', '…') - text = get_real_sokuon(text) - text = get_real_hatsuon(text) - for regex, replacement in _romaji_to_ipa2: - text = re.sub(regex, replacement, text) - return text - - -def japanese_to_ipa3(text): - text = japanese_to_ipa2(text).replace('n^', 'ȵ').replace( - 'ʃ', 'ɕ').replace('*', '\u0325').replace('#', '\u031a') - text = re.sub( - r'([aiɯeo])\1+', lambda x: x.group(0)[0]+'ː'*(len(x.group(0))-1), text) - text = re.sub(r'((?:^|\s)(?:ts|tɕ|[kpt]))', r'\1ʰ', text) - return text diff --git a/spaces/luigisaetta/whisper-demo/app.py b/spaces/luigisaetta/whisper-demo/app.py deleted file mode 100644 index a69ebfd3052770f0d24de6a2c1e32568dd6425fa..0000000000000000000000000000000000000000 --- a/spaces/luigisaetta/whisper-demo/app.py +++ /dev/null @@ -1,97 +0,0 @@ -import torch - -import gradio as gr -import pytube as pt -from transformers import pipeline -from huggingface_hub import model_info - -MODEL_NAME = "luigisaetta/whisper-medium-it" #this always needs to stay in line 8 :D sorry for the hackiness -lang = "it" - -device = 0 if torch.cuda.is_available() else "cpu" -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - chunk_length_s=30, - device=device, -) - -pipe.model.config.forced_decoder_ids = pipe.tokenizer.get_decoder_prompt_ids(language=lang, task="transcribe") - -def transcribe(microphone, file_upload): - warn_output = "" - if (microphone is not None) and (file_upload is not 
None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - - file = microphone if microphone is not None else file_upload - - text = pipe(file)["text"] - - return warn_output + text - - -def _return_yt_html_embed(yt_url): - video_id = yt_url.split("?v=")[-1] - HTML_str = ( - f'
' - "
" - ) - return HTML_str - - -def yt_transcribe(yt_url): - yt = pt.YouTube(yt_url) - html_embed_str = _return_yt_html_embed(yt_url) - stream = yt.streams.filter(only_audio=True)[0] - stream.download(filename="audio.mp3") - - text = pipe("audio.mp3")["text"] - - return html_embed_str, text - - -demo = gr.Blocks() - -mf_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath", optional=True), - gr.inputs.Audio(source="upload", type="filepath", optional=True), - ], - outputs="text", - layout="horizontal", - theme="huggingface", - title="Whisper Demo: Transcribe Audio", - description=( - "Transcribe long-form microphone or audio inputs with the click of a button! Demo uses the the fine-tuned" - f" checkpoint [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files" - " of arbitrary length." - ), - allow_flagging="never", -) - -yt_transcribe = gr.Interface( - fn=yt_transcribe, - inputs=[gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL")], - outputs=["html", "text"], - layout="horizontal", - theme="huggingface", - title="Whisper Demo: Transcribe YouTube", - description=( - "Transcribe long-form YouTube videos with the click of a button! Demo uses the the fine-tuned checkpoint:" - f" [{MODEL_NAME}](https://huggingface.co/{MODEL_NAME}) and 🤗 Transformers to transcribe audio files of" - " arbitrary length." - ), - allow_flagging="never", -) - -with demo: - gr.TabbedInterface([mf_transcribe, yt_transcribe], ["Transcribe Audio", "Transcribe YouTube"]) - -demo.launch(enable_queue=True) diff --git a/spaces/lunarflu/HF-QA-Demo-3/tests/__init__.py b/spaces/lunarflu/HF-QA-Demo-3/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/lysine/auscultate/src/app/App.tsx b/spaces/lysine/auscultate/src/app/App.tsx deleted file mode 100644 index eb499f846cf3944b32515e6cf7e9df542fb002a3..0000000000000000000000000000000000000000 --- a/spaces/lysine/auscultate/src/app/App.tsx +++ /dev/null @@ -1,67 +0,0 @@ -import React from 'react'; -import { Helmet } from 'react-helmet'; -import { useNavigate } from 'react-router-dom'; -import HeartSvg from './heart.svg'; -import LungsSvg from './lungs.svg'; - -const links = [ - { - icon: , - title: 'Heart Sounds', - description: 'From the CirCor DigiScope Phonocardiogram Dataset', - link: '/heart', - }, - { - icon: , - title: 'Breath Sounds', - description: 'From the Respiratory Sound Database', - link: '/breath', - }, -]; - -export default function App() { - const navigate = useNavigate(); - return ( -
- - Auscultation Database - -
-
    -
  • - Med -
  • -
  • Auscultation
  • -
-
-

Auscultation Database

-

Auscultation practice with annotated sound tracks.

-
- {links.map(link => ( -
-
{link.icon}
-
-

{link.title}

-

{link.description}

-
- -
-
-
- ))} -
-
- ); -} diff --git a/spaces/ma-xu/LIVE/cuda_utils.h b/spaces/ma-xu/LIVE/cuda_utils.h deleted file mode 100644 index 1e4609babc129a27397df72879bd6c8f55e71d1a..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/cuda_utils.h +++ /dev/null @@ -1,53 +0,0 @@ -#pragma once - -#ifdef __CUDACC__ - #include - #include -#endif -#include -#include -#include - -#ifdef __CUDACC__ -#define checkCuda(x) do { if((x)!=cudaSuccess) { \ - printf("CUDA Runtime Error: %s at %s:%d\n",\ - cudaGetErrorString(x),__FILE__,__LINE__);\ - exit(1);}} while(0) -#endif - -template -DEVICE -inline T infinity() { -#ifdef __CUDA_ARCH__ - const unsigned long long ieee754inf = 0x7ff0000000000000; - return __longlong_as_double(ieee754inf); -#else - return std::numeric_limits::infinity(); -#endif -} - -template <> -DEVICE -inline double infinity() { -#ifdef __CUDA_ARCH__ - return __longlong_as_double(0x7ff0000000000000ULL); -#else - return std::numeric_limits::infinity(); -#endif -} - -template <> -DEVICE -inline float infinity() { -#ifdef __CUDA_ARCH__ - return __int_as_float(0x7f800000); -#else - return std::numeric_limits::infinity(); -#endif -} - -inline void cuda_synchronize() { -#ifdef __CUDACC__ - checkCuda(cudaDeviceSynchronize()); -#endif -} diff --git a/spaces/ma-xu/LIVE/pydiffvg_tensorflow/pixel_filter.py b/spaces/ma-xu/LIVE/pydiffvg_tensorflow/pixel_filter.py deleted file mode 100644 index 0eff01742bfcea55240dc4d2c50006e3dd42aadb..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/pydiffvg_tensorflow/pixel_filter.py +++ /dev/null @@ -1,8 +0,0 @@ -import tensorflow as tf - -class PixelFilter: - def __init__(self, - type, - radius = tf.constant(0.5)): - self.type = type - self.radius = radius diff --git a/spaces/ma-xu/LIVE/thrust/thrust/memory/detail/device_system_resource.h b/spaces/ma-xu/LIVE/thrust/thrust/memory/detail/device_system_resource.h deleted file mode 100644 index 9e94991d6124c42702ce44795c100d38a1016fe1..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/memory/detail/device_system_resource.h +++ /dev/null @@ -1,39 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include - -// #include the device system's memory_resource header -#define __THRUST_DEVICE_SYSTEM_MEMORY_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/memory_resource.h> -#include __THRUST_DEVICE_SYSTEM_MEMORY_HEADER -#undef __THRUST_DEVICE_SYSTEM_MEMORY_HEADER - -namespace thrust -{ - - -typedef thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::memory_resource - device_memory_resource; -typedef thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::universal_memory_resource - universal_memory_resource; -typedef thrust::system::__THRUST_DEVICE_SYSTEM_NAMESPACE::universal_host_pinned_memory_resource - universal_host_pinned_memory_resource; - - -} // end thrust - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/ops/dcn/deform_conv.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/ops/dcn/deform_conv.py deleted file mode 100644 index 6268ca825d59ef4a30d4d2156c4438cbbe9b3c1e..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/ops/dcn/deform_conv.py +++ /dev/null @@ -1,379 +0,0 @@ -import math -import os -import torch -from torch import nn as nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn import functional as F -from torch.nn.modules.utils import _pair, _single - -BASICSR_JIT = os.getenv('BASICSR_JIT') -if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - deform_conv_ext = load( - 'deform_conv', - sources=[ - os.path.join(module_path, 'src', 'deform_conv_ext.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda.cpp'), - os.path.join(module_path, 'src', 'deform_conv_cuda_kernel.cu'), - ], - ) -else: - try: - from . import deform_conv_ext - except ImportError: - pass - # avoid annoying print output - # print(f'Cannot import deform_conv_ext. Error: {error}. You may need to: \n ' - # '1. compile with BASICSR_EXT=True. or\n ' - # '2. 
set BASICSR_JIT=True during running') - - -class DeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - weight, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - im2col_step=64): - if input is not None and input.dim() != 4: - raise ValueError(f'Expected 4D tensor as input, got {input.dim()}D tensor instead.') - ctx.stride = _pair(stride) - ctx.padding = _pair(padding) - ctx.dilation = _pair(dilation) - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.im2col_step = im2col_step - - ctx.save_for_backward(input, offset, weight) - - output = input.new_empty(DeformConvFunction._output_size(input, weight, ctx.padding, ctx.dilation, ctx.stride)) - - ctx.bufs_ = [input.new_empty(0), input.new_empty(0)] # columns, ones - - if not input.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - deform_conv_ext.deform_conv_forward(input, weight, - offset, output, ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input, offset, weight = ctx.saved_tensors - - grad_input = grad_offset = grad_weight = None - - if not grad_output.is_cuda: - raise NotImplementedError - else: - cur_im2col_step = min(ctx.im2col_step, input.shape[0]) - assert (input.shape[0] % cur_im2col_step) == 0, 'im2col step must divide batchsize' - - if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]: - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - deform_conv_ext.deform_conv_backward_input(input, offset, grad_output, grad_input, - grad_offset, weight, ctx.bufs_[0], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], ctx.padding[1], - ctx.padding[0], ctx.dilation[1], ctx.dilation[0], ctx.groups, - ctx.deformable_groups, cur_im2col_step) - - if ctx.needs_input_grad[2]: - grad_weight = torch.zeros_like(weight) - deform_conv_ext.deform_conv_backward_parameters(input, offset, grad_output, grad_weight, - ctx.bufs_[0], ctx.bufs_[1], weight.size(3), - weight.size(2), ctx.stride[1], ctx.stride[0], - ctx.padding[1], ctx.padding[0], ctx.dilation[1], - ctx.dilation[0], ctx.groups, ctx.deformable_groups, 1, - cur_im2col_step) - - return (grad_input, grad_offset, grad_weight, None, None, None, None, None) - - @staticmethod - def _output_size(input, weight, padding, dilation, stride): - channels = weight.size(0) - output_size = (input.size(0), channels) - for d in range(input.dim() - 2): - in_size = input.size(d + 2) - pad = padding[d] - kernel = dilation[d] * (weight.size(d + 2) - 1) + 1 - stride_ = stride[d] - output_size += ((in_size + (2 * pad) - kernel) // stride_ + 1, ) - if not all(map(lambda s: s > 0, output_size)): - raise ValueError(f'convolution input is too small (output would be {"x".join(map(str, output_size))})') - return output_size - - -class ModulatedDeformConvFunction(Function): - - @staticmethod - def forward(ctx, - input, - offset, - mask, - weight, - bias=None, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1): - ctx.stride = stride - ctx.padding = padding - ctx.dilation = dilation - ctx.groups = groups - ctx.deformable_groups = deformable_groups - ctx.with_bias = bias is not None - if not 
ctx.with_bias: - bias = input.new_empty(1) # fake tensor - if not input.is_cuda: - raise NotImplementedError - if weight.requires_grad or mask.requires_grad or offset.requires_grad or input.requires_grad: - ctx.save_for_backward(input, offset, mask, weight, bias) - output = input.new_empty(ModulatedDeformConvFunction._infer_shape(ctx, input, weight)) - ctx._bufs = [input.new_empty(0), input.new_empty(0)] - deform_conv_ext.modulated_deform_conv_forward(input, weight, bias, ctx._bufs[0], offset, mask, output, - ctx._bufs[1], weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - if not grad_output.is_cuda: - raise NotImplementedError - input, offset, mask, weight, bias = ctx.saved_tensors - grad_input = torch.zeros_like(input) - grad_offset = torch.zeros_like(offset) - grad_mask = torch.zeros_like(mask) - grad_weight = torch.zeros_like(weight) - grad_bias = torch.zeros_like(bias) - deform_conv_ext.modulated_deform_conv_backward(input, weight, bias, ctx._bufs[0], offset, mask, ctx._bufs[1], - grad_input, grad_weight, grad_bias, grad_offset, grad_mask, - grad_output, weight.shape[2], weight.shape[3], ctx.stride, - ctx.stride, ctx.padding, ctx.padding, ctx.dilation, ctx.dilation, - ctx.groups, ctx.deformable_groups, ctx.with_bias) - if not ctx.with_bias: - grad_bias = None - - return (grad_input, grad_offset, grad_mask, grad_weight, grad_bias, None, None, None, None, None) - - @staticmethod - def _infer_shape(ctx, input, weight): - n = input.size(0) - channels_out = weight.size(0) - height, width = input.shape[2:4] - kernel_h, kernel_w = weight.shape[2:4] - height_out = (height + 2 * ctx.padding - (ctx.dilation * (kernel_h - 1) + 1)) // ctx.stride + 1 - width_out = (width + 2 * ctx.padding - (ctx.dilation * (kernel_w - 1) + 1)) // ctx.stride + 1 - return n, channels_out, height_out, width_out - - -deform_conv = DeformConvFunction.apply -modulated_deform_conv = ModulatedDeformConvFunction.apply - - -class DeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=False): - super(DeformConv, self).__init__() - - assert not bias - assert in_channels % groups == 0, f'in_channels {in_channels} is not divisible by groups {groups}' - assert out_channels % groups == 0, f'out_channels {out_channels} is not divisible by groups {groups}' - - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = _pair(stride) - self.padding = _pair(padding) - self.dilation = _pair(dilation) - self.groups = groups - self.deformable_groups = deformable_groups - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // self.groups, *self.kernel_size)) - - self.reset_parameters() - - def reset_parameters(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. 
/ math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - - def forward(self, x, offset): - # To fix an assert error in deform_conv_cuda.cpp:128 - # input image is smaller than kernel - input_pad = (x.size(2) < self.kernel_size[0] or x.size(3) < self.kernel_size[1]) - if input_pad: - pad_h = max(self.kernel_size[0] - x.size(2), 0) - pad_w = max(self.kernel_size[1] - x.size(3), 0) - x = F.pad(x, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - offset = F.pad(offset, (0, pad_w, 0, pad_h), 'constant', 0).contiguous() - out = deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - if input_pad: - out = out[:, :, :out.size(2) - pad_h, :out.size(3) - pad_w].contiguous() - return out - - -class DeformConvPack(DeformConv): - """A Deformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. - kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(DeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 2 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_offset() - - def init_offset(self): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - offset = self.conv_offset(x) - return deform_conv(x, offset, self.weight, self.stride, self.padding, self.dilation, self.groups, - self.deformable_groups) - - -class ModulatedDeformConv(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - groups=1, - deformable_groups=1, - bias=True): - super(ModulatedDeformConv, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = _pair(kernel_size) - self.stride = stride - self.padding = padding - self.dilation = dilation - self.groups = groups - self.deformable_groups = deformable_groups - self.with_bias = bias - # enable compatibility with nn.Conv2d - self.transposed = False - self.output_padding = _single(0) - - self.weight = nn.Parameter(torch.Tensor(out_channels, in_channels // groups, *self.kernel_size)) - if bias: - self.bias = nn.Parameter(torch.Tensor(out_channels)) - else: - self.register_parameter('bias', None) - self.init_weights() - - def init_weights(self): - n = self.in_channels - for k in self.kernel_size: - n *= k - stdv = 1. / math.sqrt(n) - self.weight.data.uniform_(-stdv, stdv) - if self.bias is not None: - self.bias.data.zero_() - - def forward(self, x, offset, mask): - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) - - -class ModulatedDeformConvPack(ModulatedDeformConv): - """A ModulatedDeformable Conv Encapsulation that acts as normal Conv layers. - - Args: - in_channels (int): Same as nn.Conv2d. - out_channels (int): Same as nn.Conv2d. 
- kernel_size (int or tuple[int]): Same as nn.Conv2d. - stride (int or tuple[int]): Same as nn.Conv2d. - padding (int or tuple[int]): Same as nn.Conv2d. - dilation (int or tuple[int]): Same as nn.Conv2d. - groups (int): Same as nn.Conv2d. - bias (bool or str): If specified as `auto`, it will be decided by the - norm_cfg. Bias will be set as True if norm_cfg is None, otherwise - False. - """ - - _version = 2 - - def __init__(self, *args, **kwargs): - super(ModulatedDeformConvPack, self).__init__(*args, **kwargs) - - self.conv_offset = nn.Conv2d( - self.in_channels, - self.deformable_groups * 3 * self.kernel_size[0] * self.kernel_size[1], - kernel_size=self.kernel_size, - stride=_pair(self.stride), - padding=_pair(self.padding), - dilation=_pair(self.dilation), - bias=True) - self.init_weights() - - def init_weights(self): - super(ModulatedDeformConvPack, self).init_weights() - if hasattr(self, 'conv_offset'): - self.conv_offset.weight.data.zero_() - self.conv_offset.bias.data.zero_() - - def forward(self, x): - out = self.conv_offset(x) - o1, o2, mask = torch.chunk(out, 3, dim=1) - offset = torch.cat((o1, o2), dim=1) - mask = torch.sigmoid(mask) - return modulated_deform_conv(x, offset, mask, self.weight, self.bias, self.stride, self.padding, self.dilation, - self.groups, self.deformable_groups) diff --git a/spaces/mandar100/chatbot_godel_large/app.py b/spaces/mandar100/chatbot_godel_large/app.py deleted file mode 100644 index 53e7e951709d40af7ecc0b265ac4f9774c8058e0..0000000000000000000000000000000000000000 --- a/spaces/mandar100/chatbot_godel_large/app.py +++ /dev/null @@ -1,36 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# In[ ]: - - -import gradio as gr -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - -tokenizer = AutoTokenizer.from_pretrained("microsoft/GODEL-v1_1-large-seq2seq") -model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/GODEL-v1_1-large-seq2seq") - -def predict(input,knowledge, history=[]): -# instruction="Instruction: given a dialog context and related knowledge, you need to answer the question based on the knowledge." - instruction="Instruction: given a dialog context, you need to response empathically" - knowledge = '[KNOWLEDGE]' + knowledge - s = list(sum(history, ())) - s.append(input) - dialog = ' EOS ' .join(s) - query = f"{instruction} [CONTEXT] {dialog} {knowledge}" - top_p = 0.9 - min_length = 8 - max_length = 64 - new_user_input_ids = tokenizer.encode(f"{query}", return_tensors='pt') - print(input,s) - output = model.generate(new_user_input_ids, min_length=int( - min_length), max_length=int(max_length), top_p=top_p, do_sample=True).tolist() - response = tokenizer.decode(output[0], skip_special_tokens=True) - history.append((input, response)) - return history, history - -gr.Interface(fn=predict, - inputs=["text","text",'state'], - - outputs=["chatbot",'state']).launch() - diff --git a/spaces/manhkhanhUIT/BOPBTL/Face_Detection/align_warp_back_multiple_dlib_HR.py b/spaces/manhkhanhUIT/BOPBTL/Face_Detection/align_warp_back_multiple_dlib_HR.py deleted file mode 100644 index f3711c968ebeba22f3872b8074b7c89f55a634a1..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/BOPBTL/Face_Detection/align_warp_back_multiple_dlib_HR.py +++ /dev/null @@ -1,437 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. 
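-# Warp restored face crops back into the original photo:
-#   1. detect faces and 68-point landmarks with dlib,
-#   2. align each face to a canonical 512x512 template via a similarity transform,
-#   3. histogram-match the restored crop to the original photo's colors,
-#   4. warp the crop back with the inverse transform and blur-blend it in.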
-
-import torch
-import numpy as np
-import skimage.io as io
-
-# from face_sdk import FaceDetection
-import matplotlib.pyplot as plt
-from matplotlib.patches import Rectangle
-from skimage.transform import SimilarityTransform
-from skimage.transform import warp
-from PIL import Image, ImageFilter
-import torch.nn.functional as F
-import torchvision as tv
-import torchvision.utils as vutils
-import time
-import cv2
-import os
-from skimage import img_as_ubyte
-import json
-import argparse
-import dlib
-
-
-def calculate_cdf(histogram):
-    """
-    This method calculates the cumulative distribution function
-    :param array histogram: The values of the histogram
-    :return: normalized_cdf: The normalized cumulative distribution function
-    :rtype: array
-    """
-    # Get the cumulative sum of the elements
-    cdf = histogram.cumsum()
-
-    # Normalize the cdf
-    normalized_cdf = cdf / float(cdf.max())
-
-    return normalized_cdf
-
-
-def calculate_lookup(src_cdf, ref_cdf):
-    """
-    This method creates the lookup table
-    :param array src_cdf: The cdf for the source image
-    :param array ref_cdf: The cdf for the reference image
-    :return: lookup_table: The lookup table
-    :rtype: array
-    """
-    lookup_table = np.zeros(256)
-    lookup_val = 0
-    for src_pixel_val in range(len(src_cdf)):
-        for ref_pixel_val in range(len(ref_cdf)):
-            if ref_cdf[ref_pixel_val] >= src_cdf[src_pixel_val]:
-                lookup_val = ref_pixel_val
-                break
-        lookup_table[src_pixel_val] = lookup_val
-    return lookup_table
-
-
-def match_histograms(src_image, ref_image):
-    """
-    This method matches the source image histogram to the
-    reference image histogram
-    :param image src_image: The original source image
-    :param image ref_image: The reference image
-    :return: image_after_matching
-    :rtype: image (array)
-    """
-    # Split the images into the different color channels
-    # b means blue, g means green and r means red
-    src_b, src_g, src_r = cv2.split(src_image)
-    ref_b, ref_g, ref_r = cv2.split(ref_image)
-
-    # Compute the b, g, and r histograms separately
-    # The flatten() Numpy method returns a copy of the array
-    # collapsed into one dimension.
- src_hist_blue, bin_0 = np.histogram(src_b.flatten(), 256, [0, 256]) - src_hist_green, bin_1 = np.histogram(src_g.flatten(), 256, [0, 256]) - src_hist_red, bin_2 = np.histogram(src_r.flatten(), 256, [0, 256]) - ref_hist_blue, bin_3 = np.histogram(ref_b.flatten(), 256, [0, 256]) - ref_hist_green, bin_4 = np.histogram(ref_g.flatten(), 256, [0, 256]) - ref_hist_red, bin_5 = np.histogram(ref_r.flatten(), 256, [0, 256]) - - # Compute the normalized cdf for the source and reference image - src_cdf_blue = calculate_cdf(src_hist_blue) - src_cdf_green = calculate_cdf(src_hist_green) - src_cdf_red = calculate_cdf(src_hist_red) - ref_cdf_blue = calculate_cdf(ref_hist_blue) - ref_cdf_green = calculate_cdf(ref_hist_green) - ref_cdf_red = calculate_cdf(ref_hist_red) - - # Make a separate lookup table for each color - blue_lookup_table = calculate_lookup(src_cdf_blue, ref_cdf_blue) - green_lookup_table = calculate_lookup(src_cdf_green, ref_cdf_green) - red_lookup_table = calculate_lookup(src_cdf_red, ref_cdf_red) - - # Use the lookup function to transform the colors of the original - # source image - blue_after_transform = cv2.LUT(src_b, blue_lookup_table) - green_after_transform = cv2.LUT(src_g, green_lookup_table) - red_after_transform = cv2.LUT(src_r, red_lookup_table) - - # Put the image back together - image_after_matching = cv2.merge([blue_after_transform, green_after_transform, red_after_transform]) - image_after_matching = cv2.convertScaleAbs(image_after_matching) - - return image_after_matching - - -def _standard_face_pts(): - pts = ( - np.array([196.0, 226.0, 316.0, 226.0, 256.0, 286.0, 220.0, 360.4, 292.0, 360.4], np.float32) / 256.0 - - 1.0 - ) - - return np.reshape(pts, (5, 2)) - - -def _origin_face_pts(): - pts = np.array([196.0, 226.0, 316.0, 226.0, 256.0, 286.0, 220.0, 360.4, 292.0, 360.4], np.float32) - - return np.reshape(pts, (5, 2)) - - -def compute_transformation_matrix(img, landmark, normalize, target_face_scale=1.0): - - std_pts = _standard_face_pts() # [-1,1] - target_pts = (std_pts * target_face_scale + 1) / 2 * 512.0 - - # print(target_pts) - - h, w, c = img.shape - if normalize == True: - landmark[:, 0] = landmark[:, 0] / h * 2 - 1.0 - landmark[:, 1] = landmark[:, 1] / w * 2 - 1.0 - - # print(landmark) - - affine = SimilarityTransform() - - affine.estimate(target_pts, landmark) - - return affine - - -def compute_inverse_transformation_matrix(img, landmark, normalize, target_face_scale=1.0): - - std_pts = _standard_face_pts() # [-1,1] - target_pts = (std_pts * target_face_scale + 1) / 2 * 512.0 - - # print(target_pts) - - h, w, c = img.shape - if normalize == True: - landmark[:, 0] = landmark[:, 0] / h * 2 - 1.0 - landmark[:, 1] = landmark[:, 1] / w * 2 - 1.0 - - # print(landmark) - - affine = SimilarityTransform() - - affine.estimate(landmark, target_pts) - - return affine - - -def show_detection(image, box, landmark): - plt.imshow(image) - print(box[2] - box[0]) - plt.gca().add_patch( - Rectangle( - (box[1], box[0]), box[2] - box[0], box[3] - box[1], linewidth=1, edgecolor="r", facecolor="none" - ) - ) - plt.scatter(landmark[0][0], landmark[0][1]) - plt.scatter(landmark[1][0], landmark[1][1]) - plt.scatter(landmark[2][0], landmark[2][1]) - plt.scatter(landmark[3][0], landmark[3][1]) - plt.scatter(landmark[4][0], landmark[4][1]) - plt.show() - - -def affine2theta(affine, input_w, input_h, target_w, target_h): - # param = np.linalg.inv(affine) - param = affine - theta = np.zeros([2, 3]) - theta[0, 0] = param[0, 0] * input_h / target_h - theta[0, 1] = param[0, 1] * input_w / 
target_h - theta[0, 2] = (2 * param[0, 2] + param[0, 0] * input_h + param[0, 1] * input_w) / target_h - 1 - theta[1, 0] = param[1, 0] * input_h / target_w - theta[1, 1] = param[1, 1] * input_w / target_w - theta[1, 2] = (2 * param[1, 2] + param[1, 0] * input_h + param[1, 1] * input_w) / target_w - 1 - return theta - - -def blur_blending(im1, im2, mask): - - mask *= 255.0 - - kernel = np.ones((10, 10), np.uint8) - mask = cv2.erode(mask, kernel, iterations=1) - - mask = Image.fromarray(mask.astype("uint8")).convert("L") - im1 = Image.fromarray(im1.astype("uint8")) - im2 = Image.fromarray(im2.astype("uint8")) - - mask_blur = mask.filter(ImageFilter.GaussianBlur(20)) - im = Image.composite(im1, im2, mask) - - im = Image.composite(im, im2, mask_blur) - - return np.array(im) / 255.0 - - -def blur_blending_cv2(im1, im2, mask): - - mask *= 255.0 - - kernel = np.ones((9, 9), np.uint8) - mask = cv2.erode(mask, kernel, iterations=3) - - mask_blur = cv2.GaussianBlur(mask, (25, 25), 0) - mask_blur /= 255.0 - - im = im1 * mask_blur + (1 - mask_blur) * im2 - - im /= 255.0 - im = np.clip(im, 0.0, 1.0) - - return im - - -# def Poisson_blending(im1,im2,mask): - - -# Image.composite( -def Poisson_blending(im1, im2, mask): - - # mask=1-mask - mask *= 255 - kernel = np.ones((10, 10), np.uint8) - mask = cv2.erode(mask, kernel, iterations=1) - mask /= 255 - mask = 1 - mask - mask *= 255 - - mask = mask[:, :, 0] - width, height, channels = im1.shape - center = (int(height / 2), int(width / 2)) - result = cv2.seamlessClone( - im2.astype("uint8"), im1.astype("uint8"), mask.astype("uint8"), center, cv2.MIXED_CLONE - ) - - return result / 255.0 - - -def Poisson_B(im1, im2, mask, center): - - mask *= 255 - - result = cv2.seamlessClone( - im2.astype("uint8"), im1.astype("uint8"), mask.astype("uint8"), center, cv2.NORMAL_CLONE - ) - - return result / 255 - - -def seamless_clone(old_face, new_face, raw_mask): - - height, width, _ = old_face.shape - height = height // 2 - width = width // 2 - - y_indices, x_indices, _ = np.nonzero(raw_mask) - y_crop = slice(np.min(y_indices), np.max(y_indices)) - x_crop = slice(np.min(x_indices), np.max(x_indices)) - y_center = int(np.rint((np.max(y_indices) + np.min(y_indices)) / 2 + height)) - x_center = int(np.rint((np.max(x_indices) + np.min(x_indices)) / 2 + width)) - - insertion = np.rint(new_face[y_crop, x_crop] * 255.0).astype("uint8") - insertion_mask = np.rint(raw_mask[y_crop, x_crop] * 255.0).astype("uint8") - insertion_mask[insertion_mask != 0] = 255 - prior = np.rint(np.pad(old_face * 255.0, ((height, height), (width, width), (0, 0)), "constant")).astype( - "uint8" - ) - # if np.sum(insertion_mask) == 0: - n_mask = insertion_mask[1:-1, 1:-1, :] - n_mask = cv2.copyMakeBorder(n_mask, 1, 1, 1, 1, cv2.BORDER_CONSTANT, 0) - print(n_mask.shape) - x, y, w, h = cv2.boundingRect(n_mask[:, :, 0]) - if w < 4 or h < 4: - blended = prior - else: - blended = cv2.seamlessClone( - insertion, # pylint: disable=no-member - prior, - insertion_mask, - (x_center, y_center), - cv2.NORMAL_CLONE, - ) # pylint: disable=no-member - - blended = blended[height:-height, width:-width] - - return blended.astype("float32") / 255.0 - - -def get_landmark(face_landmarks, id): - part = face_landmarks.part(id) - x = part.x - y = part.y - - return (x, y) - - -def search(face_landmarks): - - x1, y1 = get_landmark(face_landmarks, 36) - x2, y2 = get_landmark(face_landmarks, 39) - x3, y3 = get_landmark(face_landmarks, 42) - x4, y4 = get_landmark(face_landmarks, 45) - - x_nose, y_nose = get_landmark(face_landmarks, 
30) - - x_left_mouth, y_left_mouth = get_landmark(face_landmarks, 48) - x_right_mouth, y_right_mouth = get_landmark(face_landmarks, 54) - - x_left_eye = int((x1 + x2) / 2) - y_left_eye = int((y1 + y2) / 2) - x_right_eye = int((x3 + x4) / 2) - y_right_eye = int((y3 + y4) / 2) - - results = np.array( - [ - [x_left_eye, y_left_eye], - [x_right_eye, y_right_eye], - [x_nose, y_nose], - [x_left_mouth, y_left_mouth], - [x_right_mouth, y_right_mouth], - ] - ) - - return results - - -if __name__ == "__main__": - - parser = argparse.ArgumentParser() - parser.add_argument("--origin_url", type=str, default="./", help="origin images") - parser.add_argument("--replace_url", type=str, default="./", help="restored faces") - parser.add_argument("--save_url", type=str, default="./save") - opts = parser.parse_args() - - origin_url = opts.origin_url - replace_url = opts.replace_url - save_url = opts.save_url - - if not os.path.exists(save_url): - os.makedirs(save_url) - - face_detector = dlib.get_frontal_face_detector() - landmark_locator = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat") - - count = 0 - - for x in os.listdir(origin_url): - img_url = os.path.join(origin_url, x) - pil_img = Image.open(img_url).convert("RGB") - - origin_width, origin_height = pil_img.size - image = np.array(pil_img) - - start = time.time() - faces = face_detector(image) - done = time.time() - - if len(faces) == 0: - print("Warning: There is no face in %s" % (x)) - continue - - blended = image - for face_id in range(len(faces)): - - current_face = faces[face_id] - face_landmarks = landmark_locator(image, current_face) - current_fl = search(face_landmarks) - - forward_mask = np.ones_like(image).astype("uint8") - affine = compute_transformation_matrix(image, current_fl, False, target_face_scale=1.3) - aligned_face = warp(image, affine, output_shape=(512, 512, 3), preserve_range=True) - forward_mask = warp( - forward_mask, affine, output_shape=(512, 512, 3), order=0, preserve_range=True - ) - - affine_inverse = affine.inverse - cur_face = aligned_face - if replace_url != "": - - face_name = x[:-4] + "_" + str(face_id + 1) + ".png" - cur_url = os.path.join(replace_url, face_name) - restored_face = Image.open(cur_url).convert("RGB") - restored_face = np.array(restored_face) - cur_face = restored_face - - ## Histogram Color matching - A = cv2.cvtColor(aligned_face.astype("uint8"), cv2.COLOR_RGB2BGR) - B = cv2.cvtColor(cur_face.astype("uint8"), cv2.COLOR_RGB2BGR) - B = match_histograms(B, A) - cur_face = cv2.cvtColor(B.astype("uint8"), cv2.COLOR_BGR2RGB) - - warped_back = warp( - cur_face, - affine_inverse, - output_shape=(origin_height, origin_width, 3), - order=3, - preserve_range=True, - ) - - backward_mask = warp( - forward_mask, - affine_inverse, - output_shape=(origin_height, origin_width, 3), - order=0, - preserve_range=True, - ) ## Nearest neighbour - - blended = blur_blending_cv2(warped_back, blended, backward_mask) - blended *= 255.0 - - io.imsave(os.path.join(save_url, x), img_as_ubyte(blended / 255.0)) - - count += 1 - - if count % 1000 == 0: - print("%d have finished ..." % (count)) - diff --git a/spaces/manhkhanhUIT/BOPBTL/Global/data/custom_dataset_data_loader.py b/spaces/manhkhanhUIT/BOPBTL/Global/data/custom_dataset_data_loader.py deleted file mode 100644 index 04cc03203f216bb931eefb29b0c71c3dedaadae0..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/BOPBTL/Global/data/custom_dataset_data_loader.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) Microsoft Corporation. 
-# Licensed under the MIT License. - -import torch.utils.data -import random -from data.base_data_loader import BaseDataLoader -from data import online_dataset_for_old_photos as dts_ray_bigfile - - -def CreateDataset(opt): - dataset = None - if opt.training_dataset=='domain_A' or opt.training_dataset=='domain_B': - dataset = dts_ray_bigfile.UnPairOldPhotos_SR() - if opt.training_dataset=='mapping': - if opt.random_hole: - dataset = dts_ray_bigfile.PairOldPhotos_with_hole() - else: - dataset = dts_ray_bigfile.PairOldPhotos() - print("dataset [%s] was created" % (dataset.name())) - dataset.initialize(opt) - return dataset - -class CustomDatasetDataLoader(BaseDataLoader): - def name(self): - return 'CustomDatasetDataLoader' - - def initialize(self, opt): - BaseDataLoader.initialize(self, opt) - self.dataset = CreateDataset(opt) - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=opt.batchSize, - shuffle=not opt.serial_batches, - num_workers=int(opt.nThreads), - drop_last=True) - - def load_data(self): - return self.dataloader - - def __len__(self): - return min(len(self.dataset), self.opt.max_dataset_size) diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/util.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/util.py deleted file mode 100644 index e18b4a26082449977b27a4c1506649a2447988b1..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Face_Enhancement/util/util.py +++ /dev/null @@ -1,210 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import re -import importlib -import torch -from argparse import Namespace -import numpy as np -from PIL import Image -import os -import argparse -import dill as pickle - - -def save_obj(obj, name): - with open(name, "wb") as f: - pickle.dump(obj, f, pickle.HIGHEST_PROTOCOL) - - -def load_obj(name): - with open(name, "rb") as f: - return pickle.load(f) - - -def copyconf(default_opt, **kwargs): - conf = argparse.Namespace(**vars(default_opt)) - for key in kwargs: - print(key, kwargs[key]) - setattr(conf, key, kwargs[key]) - return conf - - -# Converts a Tensor into a Numpy array -# |imtype|: the desired type of the converted numpy array -def tensor2im(image_tensor, imtype=np.uint8, normalize=True, tile=False): - if isinstance(image_tensor, list): - image_numpy = [] - for i in range(len(image_tensor)): - image_numpy.append(tensor2im(image_tensor[i], imtype, normalize)) - return image_numpy - - if image_tensor.dim() == 4: - # transform each image in the batch - images_np = [] - for b in range(image_tensor.size(0)): - one_image = image_tensor[b] - one_image_np = tensor2im(one_image) - images_np.append(one_image_np.reshape(1, *one_image_np.shape)) - images_np = np.concatenate(images_np, axis=0) - - return images_np - - if image_tensor.dim() == 2: - image_tensor = image_tensor.unsqueeze(0) - image_numpy = image_tensor.detach().cpu().float().numpy() - if normalize: - image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 - else: - image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0 - image_numpy = np.clip(image_numpy, 0, 255) - if image_numpy.shape[2] == 1: - image_numpy = image_numpy[:, :, 0] - return image_numpy.astype(imtype) - - -# Converts a one-hot tensor into a colorful label map -def tensor2label(label_tensor, n_label, imtype=np.uint8, tile=False): - if label_tensor.dim() == 4: - # transform each image in the batch - images_np = [] - for b in 
range(label_tensor.size(0)): - one_image = label_tensor[b] - one_image_np = tensor2label(one_image, n_label, imtype) - images_np.append(one_image_np.reshape(1, *one_image_np.shape)) - images_np = np.concatenate(images_np, axis=0) - # if tile: - # images_tiled = tile_images(images_np) - # return images_tiled - # else: - # images_np = images_np[0] - # return images_np - return images_np - - if label_tensor.dim() == 1: - return np.zeros((64, 64, 3), dtype=np.uint8) - if n_label == 0: - return tensor2im(label_tensor, imtype) - label_tensor = label_tensor.cpu().float() - if label_tensor.size()[0] > 1: - label_tensor = label_tensor.max(0, keepdim=True)[1] - label_tensor = Colorize(n_label)(label_tensor) - label_numpy = np.transpose(label_tensor.numpy(), (1, 2, 0)) - result = label_numpy.astype(imtype) - return result - - -def save_image(image_numpy, image_path, create_dir=False): - if create_dir: - os.makedirs(os.path.dirname(image_path), exist_ok=True) - if len(image_numpy.shape) == 2: - image_numpy = np.expand_dims(image_numpy, axis=2) - if image_numpy.shape[2] == 1: - image_numpy = np.repeat(image_numpy, 3, 2) - image_pil = Image.fromarray(image_numpy) - - # save to png - image_pil.save(image_path.replace(".jpg", ".png")) - - -def mkdirs(paths): - if isinstance(paths, list) and not isinstance(paths, str): - for path in paths: - mkdir(path) - else: - mkdir(paths) - - -def mkdir(path): - if not os.path.exists(path): - os.makedirs(path) - - -def atoi(text): - return int(text) if text.isdigit() else text - - -def natural_keys(text): - """ - alist.sort(key=natural_keys) sorts in human order - http://nedbatchelder.com/blog/200712/human_sorting.html - (See Toothy's implementation in the comments) - """ - return [atoi(c) for c in re.split("(\d+)", text)] - - -def natural_sort(items): - items.sort(key=natural_keys) - - -def str2bool(v): - if v.lower() in ("yes", "true", "t", "y", "1"): - return True - elif v.lower() in ("no", "false", "f", "n", "0"): - return False - else: - raise argparse.ArgumentTypeError("Boolean value expected.") - - -def find_class_in_module(target_cls_name, module): - target_cls_name = target_cls_name.replace("_", "").lower() - clslib = importlib.import_module(module) - cls = None - for name, clsobj in clslib.__dict__.items(): - if name.lower() == target_cls_name: - cls = clsobj - - if cls is None: - print( - "In %s, there should be a class whose name matches %s in lowercase without underscore(_)" - % (module, target_cls_name) - ) - exit(0) - - return cls - - -def save_network(net, label, epoch, opt): - save_filename = "%s_net_%s.pth" % (epoch, label) - save_path = os.path.join(opt.checkpoints_dir, opt.name, save_filename) - torch.save(net.cpu().state_dict(), save_path) - if len(opt.gpu_ids) and torch.cuda.is_available(): - net.cuda() - - -def load_network(net, label, epoch, opt): - save_filename = "%s_net_%s.pth" % (epoch, label) - save_dir = os.path.join(opt.checkpoints_dir, opt.name) - save_path = os.path.join(save_dir, save_filename) - if os.path.exists(save_path): - weights = torch.load(save_path) - net.load_state_dict(weights) - return net - - -############################################################################### -# Code from -# https://github.com/ycszen/pytorch-seg/blob/master/transform.py -# Modified so it complies with the Citscape label map colors -############################################################################### -def uint82bin(n, count=8): - """returns the binary of integer n, count refers to amount of bits""" - return "".join([str((n >> y) & 
1) for y in range(count - 1, -1, -1)]) - - -class Colorize(object): - def __init__(self, n=35): - self.cmap = labelcolormap(n) - self.cmap = torch.from_numpy(self.cmap[:n]) - - def __call__(self, gray_image): - size = gray_image.size() - color_image = torch.ByteTensor(3, size[1], size[2]).fill_(0) - - for label in range(0, len(self.cmap)): - mask = (label == gray_image[0]).cpu() - color_image[0][mask] = self.cmap[label][0] - color_image[1][mask] = self.cmap[label][1] - color_image[2][mask] = self.cmap[label][2] - - return color_image diff --git a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/data/online_dataset_for_old_photos.py b/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/data/online_dataset_for_old_photos.py deleted file mode 100644 index 068410a93eb10d5f00e694fd890f8aaa069526a3..0000000000000000000000000000000000000000 --- a/spaces/manhkhanhUIT/Image_Restoration_Colorization/Global/data/online_dataset_for_old_photos.py +++ /dev/null @@ -1,485 +0,0 @@ -# Copyright (c) Microsoft Corporation. -# Licensed under the MIT License. - -import os.path -import io -import zipfile -from data.base_dataset import BaseDataset, get_params, get_transform, normalize -from data.image_folder import make_dataset -from PIL import Image -import torchvision.transforms as transforms -import numpy as np -from data.Load_Bigfile import BigFileMemoryLoader -import random -import cv2 -from io import BytesIO - -def pil_to_np(img_PIL): - '''Converts image in PIL format to np.array. - - From W x H x C [0...255] to C x W x H [0..1] - ''' - ar = np.array(img_PIL) - - if len(ar.shape) == 3: - ar = ar.transpose(2, 0, 1) - else: - ar = ar[None, ...] - - return ar.astype(np.float32) / 255. - - -def np_to_pil(img_np): - '''Converts image in np.array format to PIL image. - - From C x W x H [0..1] to W x H x C [0...255] - ''' - ar = np.clip(img_np * 255, 0, 255).astype(np.uint8) - - if img_np.shape[0] == 1: - ar = ar[0] - else: - ar = ar.transpose(1, 2, 0) - - return Image.fromarray(ar) - -def synthesize_salt_pepper(image,amount,salt_vs_pepper): - - ## Give PIL, return the noisy PIL - - img_pil=pil_to_np(image) - - out = img_pil.copy() - p = amount - q = salt_vs_pepper - flipped = np.random.choice([True, False], size=img_pil.shape, - p=[p, 1 - p]) - salted = np.random.choice([True, False], size=img_pil.shape, - p=[q, 1 - q]) - peppered = ~salted - out[flipped & salted] = 1 - out[flipped & peppered] = 0. - noisy = np.clip(out, 0, 1).astype(np.float32) - - - return np_to_pil(noisy) - -def synthesize_gaussian(image,std_l,std_r): - - ## Give PIL, return the noisy PIL - - img_pil=pil_to_np(image) - - mean=0 - std=random.uniform(std_l/255.,std_r/255.) - gauss=np.random.normal(loc=mean,scale=std,size=img_pil.shape) - noisy=img_pil+gauss - noisy=np.clip(noisy,0,1).astype(np.float32) - - return np_to_pil(noisy) - -def synthesize_speckle(image,std_l,std_r): - - ## Give PIL, return the noisy PIL - - img_pil=pil_to_np(image) - - mean=0 - std=random.uniform(std_l/255.,std_r/255.) 
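-    # Speckle noise is multiplicative: the Gaussian sample drawn below is
-    # scaled by the image itself (img + gauss * img), unlike the purely
-    # additive model in synthesize_gaussian above.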
- gauss=np.random.normal(loc=mean,scale=std,size=img_pil.shape) - noisy=img_pil+gauss*img_pil - noisy=np.clip(noisy,0,1).astype(np.float32) - - return np_to_pil(noisy) - - -def synthesize_low_resolution(img): - w,h=img.size - - new_w=random.randint(int(w/2),w) - new_h=random.randint(int(h/2),h) - - img=img.resize((new_w,new_h),Image.BICUBIC) - - if random.uniform(0,1)<0.5: - img=img.resize((w,h),Image.NEAREST) - else: - img = img.resize((w, h), Image.BILINEAR) - - return img - - -def convertToJpeg(im,quality): - with BytesIO() as f: - im.save(f, format='JPEG',quality=quality) - f.seek(0) - return Image.open(f).convert('RGB') - - -def blur_image_v2(img): - - - x=np.array(img) - kernel_size_candidate=[(3,3),(5,5),(7,7)] - kernel_size=random.sample(kernel_size_candidate,1)[0] - std=random.uniform(1.,5.) - - #print("The gaussian kernel size: (%d,%d) std: %.2f"%(kernel_size[0],kernel_size[1],std)) - blur=cv2.GaussianBlur(x,kernel_size,std) - - return Image.fromarray(blur.astype(np.uint8)) - -def online_add_degradation_v2(img): - - task_id=np.random.permutation(4) - - for x in task_id: - if x==0 and random.uniform(0,1)<0.7: - img = blur_image_v2(img) - if x==1 and random.uniform(0,1)<0.7: - flag = random.choice([1, 2, 3]) - if flag == 1: - img = synthesize_gaussian(img, 5, 50) - if flag == 2: - img = synthesize_speckle(img, 5, 50) - if flag == 3: - img = synthesize_salt_pepper(img, random.uniform(0, 0.01), random.uniform(0.3, 0.8)) - if x==2 and random.uniform(0,1)<0.7: - img=synthesize_low_resolution(img) - - if x==3 and random.uniform(0,1)<0.7: - img=convertToJpeg(img,random.randint(40,100)) - - return img - - -def irregular_hole_synthesize(img,mask): - - img_np=np.array(img).astype('uint8') - mask_np=np.array(mask).astype('uint8') - mask_np=mask_np/255 - img_new=img_np*(1-mask_np)+mask_np*255 - - - hole_img=Image.fromarray(img_new.astype('uint8')).convert("RGB") - - return hole_img,mask.convert("L") - -def zero_mask(size): - x=np.zeros((size,size,3)).astype('uint8') - mask=Image.fromarray(x).convert("RGB") - return mask - - - -class UnPairOldPhotos_SR(BaseDataset): ## Synthetic + Real Old - def initialize(self, opt): - self.opt = opt - self.isImage = 'domainA' in opt.name - self.task = 'old_photo_restoration_training_vae' - self.dir_AB = opt.dataroot - if self.isImage: - - self.load_img_dir_L_old=os.path.join(self.dir_AB,"Real_L_old.bigfile") - self.load_img_dir_RGB_old=os.path.join(self.dir_AB,"Real_RGB_old.bigfile") - self.load_img_dir_clean=os.path.join(self.dir_AB,"VOC_RGB_JPEGImages.bigfile") - - self.loaded_imgs_L_old=BigFileMemoryLoader(self.load_img_dir_L_old) - self.loaded_imgs_RGB_old=BigFileMemoryLoader(self.load_img_dir_RGB_old) - self.loaded_imgs_clean=BigFileMemoryLoader(self.load_img_dir_clean) - - else: - # self.load_img_dir_clean=os.path.join(self.dir_AB,self.opt.test_dataset) - self.load_img_dir_clean=os.path.join(self.dir_AB,"VOC_RGB_JPEGImages.bigfile") - self.loaded_imgs_clean=BigFileMemoryLoader(self.load_img_dir_clean) - - #### - print("-------------Filter the imgs whose size <256 in VOC-------------") - self.filtered_imgs_clean=[] - for i in range(len(self.loaded_imgs_clean)): - img_name,img=self.loaded_imgs_clean[i] - h,w=img.size - if h<256 or w<256: - continue - self.filtered_imgs_clean.append((img_name,img)) - - print("--------Origin image num is [%d], filtered result is [%d]--------" % ( - len(self.loaded_imgs_clean), len(self.filtered_imgs_clean))) - ## Filter these images whose size is less than 256 - - # self.img_list=os.listdir(load_img_dir) - self.pid = 
os.getpid() - - def __getitem__(self, index): - - - is_real_old=0 - - sampled_dataset=None - degradation=None - if self.isImage: ## domain A , contains 2 kinds of data: synthetic + real_old - P=random.uniform(0,2) - if P>=0 and P<1: - if random.uniform(0,1)<0.5: - sampled_dataset=self.loaded_imgs_L_old - self.load_img_dir=self.load_img_dir_L_old - else: - sampled_dataset=self.loaded_imgs_RGB_old - self.load_img_dir=self.load_img_dir_RGB_old - is_real_old=1 - if P>=1 and P<2: - sampled_dataset=self.filtered_imgs_clean - self.load_img_dir=self.load_img_dir_clean - degradation=1 - else: - - sampled_dataset=self.filtered_imgs_clean - self.load_img_dir=self.load_img_dir_clean - - sampled_dataset_len=len(sampled_dataset) - - index=random.randint(0,sampled_dataset_len-1) - - img_name,img = sampled_dataset[index] - - if degradation is not None: - img=online_add_degradation_v2(img) - - path=os.path.join(self.load_img_dir,img_name) - - # AB = Image.open(path).convert('RGB') - # split AB image into A and B - - # apply the same transform to both A and B - - if random.uniform(0,1) <0.1: - img=img.convert("L") - img=img.convert("RGB") - ## Give a probability P, we convert the RGB image into L - - - A=img - w,h=A.size - if w<256 or h<256: - A=transforms.Scale(256,Image.BICUBIC)(A) - ## Since we want to only crop the images (256*256), for those old photos whose size is smaller than 256, we first resize them. - - transform_params = get_params(self.opt, A.size) - A_transform = get_transform(self.opt, transform_params) - - B_tensor = inst_tensor = feat_tensor = 0 - A_tensor = A_transform(A) - - - input_dict = {'label': A_tensor, 'inst': is_real_old, 'image': A_tensor, - 'feat': feat_tensor, 'path': path} - return input_dict - - def __len__(self): - return len(self.loaded_imgs_clean) ## actually, this is useless, since the selected index is just a random number - - def name(self): - return 'UnPairOldPhotos_SR' - - -class PairOldPhotos(BaseDataset): - def initialize(self, opt): - self.opt = opt - self.isImage = 'imagegan' in opt.name - self.task = 'old_photo_restoration_training_mapping' - self.dir_AB = opt.dataroot - if opt.isTrain: - self.load_img_dir_clean= os.path.join(self.dir_AB, "VOC_RGB_JPEGImages.bigfile") - self.loaded_imgs_clean = BigFileMemoryLoader(self.load_img_dir_clean) - - print("-------------Filter the imgs whose size <256 in VOC-------------") - self.filtered_imgs_clean = [] - for i in range(len(self.loaded_imgs_clean)): - img_name, img = self.loaded_imgs_clean[i] - h, w = img.size - if h < 256 or w < 256: - continue - self.filtered_imgs_clean.append((img_name, img)) - - print("--------Origin image num is [%d], filtered result is [%d]--------" % ( - len(self.loaded_imgs_clean), len(self.filtered_imgs_clean))) - - else: - self.load_img_dir=os.path.join(self.dir_AB,opt.test_dataset) - self.loaded_imgs=BigFileMemoryLoader(self.load_img_dir) - - self.pid = os.getpid() - - def __getitem__(self, index): - - - - if self.opt.isTrain: - img_name_clean,B = self.filtered_imgs_clean[index] - path = os.path.join(self.load_img_dir_clean, img_name_clean) - if self.opt.use_v2_degradation: - A=online_add_degradation_v2(B) - ### Remind: A is the input and B is corresponding GT - else: - - if self.opt.test_on_synthetic: - - img_name_B,B=self.loaded_imgs[index] - A=online_add_degradation_v2(B) - img_name_A=img_name_B - path = os.path.join(self.load_img_dir, img_name_A) - else: - img_name_A,A=self.loaded_imgs[index] - img_name_B,B=self.loaded_imgs[index] - path = os.path.join(self.load_img_dir, img_name_A) - - - 
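-        # With probability 0.1 during training, convert both the degraded
-        # input (A) and its ground truth (B) to grayscale (and back to RGB)
-        # so the mapping network also sees monochrome photo pairs.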
if random.uniform(0,1)<0.1 and self.opt.isTrain: - A=A.convert("L") - B=B.convert("L") - A=A.convert("RGB") - B=B.convert("RGB") - ## In P, we convert the RGB into L - - - ##test on L - - # split AB image into A and B - # w, h = img.size - # w2 = int(w / 2) - # A = img.crop((0, 0, w2, h)) - # B = img.crop((w2, 0, w, h)) - w,h=A.size - if w<256 or h<256: - A=transforms.Scale(256,Image.BICUBIC)(A) - B=transforms.Scale(256, Image.BICUBIC)(B) - - # apply the same transform to both A and B - transform_params = get_params(self.opt, A.size) - A_transform = get_transform(self.opt, transform_params) - B_transform = get_transform(self.opt, transform_params) - - B_tensor = inst_tensor = feat_tensor = 0 - A_tensor = A_transform(A) - B_tensor = B_transform(B) - - input_dict = {'label': A_tensor, 'inst': inst_tensor, 'image': B_tensor, - 'feat': feat_tensor, 'path': path} - return input_dict - - def __len__(self): - - if self.opt.isTrain: - return len(self.filtered_imgs_clean) - else: - return len(self.loaded_imgs) - - def name(self): - return 'PairOldPhotos' - - -class PairOldPhotos_with_hole(BaseDataset): - def initialize(self, opt): - self.opt = opt - self.isImage = 'imagegan' in opt.name - self.task = 'old_photo_restoration_training_mapping' - self.dir_AB = opt.dataroot - if opt.isTrain: - self.load_img_dir_clean= os.path.join(self.dir_AB, "VOC_RGB_JPEGImages.bigfile") - self.loaded_imgs_clean = BigFileMemoryLoader(self.load_img_dir_clean) - - print("-------------Filter the imgs whose size <256 in VOC-------------") - self.filtered_imgs_clean = [] - for i in range(len(self.loaded_imgs_clean)): - img_name, img = self.loaded_imgs_clean[i] - h, w = img.size - if h < 256 or w < 256: - continue - self.filtered_imgs_clean.append((img_name, img)) - - print("--------Origin image num is [%d], filtered result is [%d]--------" % ( - len(self.loaded_imgs_clean), len(self.filtered_imgs_clean))) - - else: - self.load_img_dir=os.path.join(self.dir_AB,opt.test_dataset) - self.loaded_imgs=BigFileMemoryLoader(self.load_img_dir) - - self.loaded_masks = BigFileMemoryLoader(opt.irregular_mask) - - self.pid = os.getpid() - - def __getitem__(self, index): - - - - if self.opt.isTrain: - img_name_clean,B = self.filtered_imgs_clean[index] - path = os.path.join(self.load_img_dir_clean, img_name_clean) - - - B=transforms.RandomCrop(256)(B) - A=online_add_degradation_v2(B) - ### Remind: A is the input and B is corresponding GT - - else: - img_name_A,A=self.loaded_imgs[index] - img_name_B,B=self.loaded_imgs[index] - path = os.path.join(self.load_img_dir, img_name_A) - - #A=A.resize((256,256)) - A=transforms.CenterCrop(256)(A) - B=A - - if random.uniform(0,1)<0.1 and self.opt.isTrain: - A=A.convert("L") - B=B.convert("L") - A=A.convert("RGB") - B=B.convert("RGB") - ## In P, we convert the RGB into L - - if self.opt.isTrain: - mask_name,mask=self.loaded_masks[random.randint(0,len(self.loaded_masks)-1)] - else: - mask_name, mask = self.loaded_masks[index%100] - mask = mask.resize((self.opt.loadSize, self.opt.loadSize), Image.NEAREST) - - if self.opt.random_hole and random.uniform(0,1)>0.5 and self.opt.isTrain: - mask=zero_mask(256) - - if self.opt.no_hole: - mask=zero_mask(256) - - - A,_=irregular_hole_synthesize(A,mask) - - if not self.opt.isTrain and self.opt.hole_image_no_mask: - mask=zero_mask(256) - - transform_params = get_params(self.opt, A.size) - A_transform = get_transform(self.opt, transform_params) - B_transform = get_transform(self.opt, transform_params) - - if transform_params['flip'] and self.opt.isTrain: - 
mask=mask.transpose(Image.FLIP_LEFT_RIGHT) - - mask_tensor = transforms.ToTensor()(mask) - - - B_tensor = inst_tensor = feat_tensor = 0 - A_tensor = A_transform(A) - B_tensor = B_transform(B) - - input_dict = {'label': A_tensor, 'inst': mask_tensor[:1], 'image': B_tensor, - 'feat': feat_tensor, 'path': path} - return input_dict - - def __len__(self): - - if self.opt.isTrain: - return len(self.filtered_imgs_clean) - - else: - return len(self.loaded_imgs) - - def name(self): - return 'PairOldPhotos_with_hole' \ No newline at end of file diff --git a/spaces/masterzer0456/Ai1/README.md b/spaces/masterzer0456/Ai1/README.md deleted file mode 100644 index 284685b8791108832729d49e6b38ed60976050aa..0000000000000000000000000000000000000000 --- a/spaces/masterzer0456/Ai1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ai1 -emoji: 💻 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/masterzer0456/Ai1/app.py b/spaces/masterzer0456/Ai1/app.py deleted file mode 100644 index bc2de4de0a627b4227fec9e6fb01ba74017ae80c..0000000000000000000000000000000000000000 --- a/spaces/masterzer0456/Ai1/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import os -import uuid -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """Meet SAMAYA, your youthful and supportive personal assistant who can understand mutiple languages! At 21 years old, she's full of energy and -always eager to help. SAMAYA's goal is to assist you with any problems you have the ease of lodging a complaint or grievance by citizens is often lacking in many lndian cities. -She comprehends the user's issue and identifies the department from the options listed below that is most appropriate for addressing the problem. She then responds with the name of the department to which the problem pertains and generates a random 12-digit token ID for the lodged problem, reply will be like "Your concern falls under the purview of the 'X' department, and the token ID for tracking your resolution is 'Y'."Y should be changing every time even if the user input is same. -Departments are -1-Transport Department - The Transport Department in India is a vital government agency responsible for regulating and overseeing the country's transportation systems. It plays a pivotal role in ensuring safe and efficient mobility for citizens, in accordance with the Indian Constitution. The department manages various aspects of transportation, including road safety, vehicle registration, issuance of driving licenses, and public transportation regulations. It also collaborates with other departments to uphold constitutional principles like the right to freedom of movement and equal protection under the law. Furthermore, it is responsible for implementing policies that promote sustainable and eco-friendly transportation methods, aligning with the constitutional directive principles of state policy. -2-Education and Learning - The Department of Education and Learning in India is instrumental in upholding the principles laid out in the Indian Constitution. It is responsible for the development, regulation, and enhancement of the education system, ensuring the right to education for all citizens, as enshrined in the Constitution. 
This department oversees the curriculum, educational infrastructure, and policies aimed at promoting inclusivity and equality in education. It also plays a vital role in the realization of constitutional values, including promoting social justice and providing opportunities for all, irrespective of caste, creed, or gender. Additionally, the department aligns its efforts with constitutional ideals by fostering a culture of knowledge and innovation in the nation. -3-Health & Wellness - The Department of Health and Wellness in India is integral to the realization of key constitutional principles. It is responsible for safeguarding the right to health, which is an essential part of the right to life, as guaranteed by the Indian Constitution. This department oversees healthcare policies, public health programs, and the establishment of healthcare infrastructure to ensure access to quality healthcare services for all citizens. It also aligns with constitutional principles by promoting the welfare of the people and fostering a state of physical and mental well-being. Additionally, the department upholds the constitutional goal of establishing a just and equitable social order by addressing health disparities and ensuring healthcare is accessible to all, regardless of socioeconomic status. -4-Environment Department - The Environment Department in India is instrumental in upholding constitutional values related to the environment and sustainable development. It is responsible for implementing policies and regulations to protect and conserve the environment, in accordance with the principles of the Indian Constitution. This department plays a key role in realizing the constitutional directive principles of state policy, which call for the protection of natural resources and ensuring a healthy environment for citizens. It also works to promote intergenerational equity, aligning with the constitutional vision of leaving a sustainable planet for future generations. Furthermore, the Environment Department is crucial in achieving a just and balanced approach to development while considering environmental factors, in accordance with the constitutional mandate of promoting social and economic justice. -5-Finance Department - The Finance Department in India is a critical government agency that plays a pivotal role in managing the country's finances in alignment with the Indian Constitution. It is responsible for financial planning, budgeting, and resource allocation, ensuring fiscal prudence and transparency in accordance with constitutional principles. This department upholds constitutional values by managing public funds efficiently, reducing economic inequalities, and promoting economic growth and social justice. It also works to ensure the equitable distribution of resources among states, aligning with the constitutional directive principles of state policy. Furthermore, it supports the constitutional mandate of promoting the welfare of the people by allocating resources for essential services and infrastructure development. -6-Agriculture Department - The Agriculture Department in India is a fundamental component of the government machinery that aligns with constitutional principles. It is responsible for the development and regulation of the agricultural sector, ensuring food security, employment generation, and economic prosperity, which are vital constitutional goals. 
This department plays a key role in realizing the constitutional directive principles of state policy by promoting sustainable agricultural practices, equitable distribution of land, and protection of the interests of farmers. It contributes to the constitutional vision of promoting the welfare of the people by enhancing the livelihoods of millions of farmers and fostering rural development. Additionally, it is essential for achieving food sovereignty and ensuring access to safe and nutritious food, in accordance with the constitutional guarantee of the right to life. -7-Housing and Urban Development - The Housing and Urban Development Department in India is a crucial governmental body that aligns with constitutional principles. It is responsible for urban planning, housing policies, and infrastructure development, ensuring sustainable and inclusive urban growth in accordance with the Indian Constitution. This department contributes to the constitutional directive principles of state policy by working towards providing adequate housing, improving living conditions, and promoting the welfare of urban and rural populations. It also supports constitutional goals of promoting social justice by addressing housing disparities and improving access to basic amenities in urban areas. Additionally, it plays a vital role in fostering economic development and environmental sustainability, which are in line with the constitutional mandate of balancing economic growth with environmental conservation. -8-Banking and Insurance - The Banking and Insurance sector in India is vital for the country's economic development and financial stability, in alignment with the principles of the Indian Constitution. It plays a key role in realizing the constitutional directive principles of state policy by promoting economic equality and ensuring that the operation of the economic system does not result in the concentration of wealth and resources in the hands of a few. This sector is responsible for regulating and facilitating the functioning of banks and insurance companies, which in turn safeguard the interests of depositors and policyholders. Additionally, it contributes to the constitutional goal of promoting the welfare of the people by providing financial services and risk protection to individuals and businesses, supporting economic growth and social well-being. -9-Commerce Department - The Commerce Department in India is a crucial government agency that aligns with constitutional principles. It is responsible for formulating and implementing trade and commerce policies, promoting economic growth, and facilitating international trade in accordance with the Indian Constitution. This department plays a key role in realizing the constitutional directive principles of state policy by fostering economic development and ensuring equitable distribution of wealth and resources. It also supports constitutional goals of promoting social justice by creating opportunities for trade and commerce that benefit all sections of society. Furthermore, it contributes to the constitutional vision of promoting economic prosperity and the welfare of the people by enhancing trade relations and facilitating economic activities both within the country and globally. -10-Water Department - The Water Department in India is instrumental in upholding constitutional values related to water resources and environmental sustainability. 
It is responsible for managing, conserving, and regulating water resources, aligning with the principles of the Indian Constitution. This department plays a crucial role in realizing the constitutional directive principles of state policy by ensuring the equitable distribution and utilization of water resources, which is essential for the welfare of the people. It also contributes to the constitutional mandate of promoting the conservation of natural resources and the protection of the environment. Furthermore, the Water Department supports the constitutional goal of fostering a just and balanced approach to development, considering the sustainable use of water resources for the benefit of present and future generations.
-11-Electric Department - In India, the Electric Department, often referred to as the Ministry of Power or the Power Department, is a critical government agency responsible for the generation, distribution, and regulation of electricity in alignment with the Indian Constitution. This department plays a key role in realizing the constitutional directive principles of state policy by ensuring the availability of electricity as an essential service to all citizens, promoting economic growth, and improving the quality of life. It also supports constitutional goals by fostering economic development, creating opportunities for employment, and enhancing the welfare of the people. Furthermore, it contributes to the constitutional vision of promoting energy sustainability and conservation, which is crucial for India's future development.
-If a question is not related to any government department, then respond 'I apologize for any inconvenience. It's irrelevant.'
-{chat_history}
-Chatbot: "Hello, I'm Samaya, a solutions gateway for your problems."
-User: {user_message}
-Chatbot:"""
-
-prompt = PromptTemplate(
-    input_variables=["chat_history", "user_message"], template=template
-)
-
-memory = ConversationBufferMemory(memory_key="chat_history")
-
-llm_chain = LLMChain(
-    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
-    prompt=prompt,
-    verbose=True,
-    memory=memory,
-)
-
-def get_text_response(user_message, history):
-    response = llm_chain.predict(user_message=user_message)
-    return response
-
-demo = gr.ChatInterface(get_text_response)
-
-if __name__ == "__main__":
-    demo.launch()  # To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
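
The prompt above leaves the 12-digit token ID up to the model, but an LLM cannot reliably produce uniform, non-repeating random digits. A minimal sketch of generating the token in application code instead and appending it to the reply — `make_token_id` and `get_text_response_with_token` are illustrative names, not part of the original app, and `llm_chain` refers to the chain defined in the file above:

```python
import secrets

def make_token_id() -> str:
    # 12 independent decimal digits from a CSPRNG, so IDs do not repeat the
    # way model-invented "random" numbers tend to.
    return "".join(str(secrets.randbelow(10)) for _ in range(12))

def get_text_response_with_token(user_message, history):
    response = llm_chain.predict(user_message=user_message)
    # Append a genuinely random token ID instead of trusting the model's.
    return f"{response} (token ID: {make_token_id()})"
```

Wiring this into `gr.ChatInterface(get_text_response_with_token)` would keep the template's behavior while making the token ID trustworthy.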
diff --git a/spaces/matjesg/deepflash2/app_onnx.py b/spaces/matjesg/deepflash2/app_onnx.py deleted file mode 100644 index c9182599adf01a9be2364d68dea1cd0232d69ea6..0000000000000000000000000000000000000000 --- a/spaces/matjesg/deepflash2/app_onnx.py +++ /dev/null @@ -1,43 +0,0 @@ -import numpy as np -import gradio as gr -import onnxruntime as ort -from matplotlib import pyplot as plt -from huggingface_hub import hf_hub_download - -def create_model_for_provider(model_path, provider="CPUExecutionProvider"): - options = ort.SessionOptions() - options.intra_op_num_threads = 1 - options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL - session = ort.InferenceSession(str(model_path), options, providers=[provider]) - session.disable_fallback() - return session - -def inference(repo_id, model_name, img): - model = hf_hub_download(repo_id=repo_id, filename=model_name) - ort_session = create_model_for_provider(model) - n_channels = ort_session.get_inputs()[0].shape[-1] - - img = img[...,:n_channels]/255 - ort_inputs = {ort_session.get_inputs()[0].name: img.astype(np.float32)} - - ort_outs = ort_session.run(None, ort_inputs) - - return ort_outs[0]*255, ort_outs[2]/0.25 - -title="deepflash2" -description='deepflash2 is a deep-learning pipeline for the segmentation of ambiguous microscopic images.\n deepflash2 uses deep model ensembles to achieve more accurate and reliable results. Thus, inference time will be more than a minute in this space.' -examples=[['matjesg/deepflash2_demo', 'cFOS_ensemble.onnx', 'cFOS_example.png'], - ['matjesg/deepflash2_demo', 'YFP_ensemble.onnx', 'YFP_example.png'] - ] - -gr.Interface(inference, - [gr.inputs.Textbox(placeholder='e.g., matjesg/cFOS_in_HC', label='repo_id'), - gr.inputs.Textbox(placeholder='e.g., ensemble.onnx', label='model_name'), - gr.inputs.Image(type='numpy', label='Input image') - ], - [gr.outputs.Image(label='Segmentation Mask'), - gr.outputs.Image(label='Uncertainty Map')], - title=title, - description=description, - examples=examples, - ).launch() \ No newline at end of file diff --git a/spaces/matthoffner/chatbot-mini/components/Chatbar/components/ChatbarSettings.tsx b/spaces/matthoffner/chatbot-mini/components/Chatbar/components/ChatbarSettings.tsx deleted file mode 100644 index d54871e91fb78c85f8fe2394cad64f18f72d6e8a..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/components/Chatbar/components/ChatbarSettings.tsx +++ /dev/null @@ -1,57 +0,0 @@ -import { IconFileExport, IconSettings } from '@tabler/icons-react'; -import { useContext, useState } from 'react'; - -import { useTranslation } from 'next-i18next'; - -import HomeContext from '@/pages/api/home/home.context'; - -import { SettingDialog } from '@/components/Settings/SettingDialog'; - -import { Import } from '../../Settings/Import'; -import { Key } from '../../Settings/Key'; -import ChatbarContext from '../Chatbar.context'; -import { ClearConversations } from './ClearConversations'; - -export const ChatbarSettings = () => { - const { t } = useTranslation('sidebar'); - const [isSettingDialogOpen, setIsSettingDialog] = useState(false); - - const { - state: { - apiKey, - lightMode, - serverSideApiKeyIsSet, - serverSidePluginKeysSet, - conversations, - }, - dispatch: homeDispatch, - } = useContext(HomeContext); - - const { - handleClearConversations, - handleImportConversations, - handleExportData, - handleApiKeyChange, - } = useContext(ChatbarContext); - - return ( -
- {conversations.length > 0 ? ( - - ) : null} - - - - {!serverSideApiKeyIsSet ? ( - - ) : null} - - { - setIsSettingDialog(false); - }} - /> -
- ); -}; diff --git a/spaces/matthoffner/chatbot/utils/app/clean.ts b/spaces/matthoffner/chatbot/utils/app/clean.ts deleted file mode 100644 index 7130eef9c2437c30519d7334e96a5ee009795486..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/utils/app/clean.ts +++ /dev/null @@ -1,99 +0,0 @@ -import { Conversation } from '@/types/chat'; -import { OpenAIModelID, OpenAIModels } from '@/types/openai'; - -import { DEFAULT_SYSTEM_PROMPT, DEFAULT_TEMPERATURE } from './const'; - -export const cleanSelectedConversation = (conversation: Conversation) => { - // added model for each conversation (3/20/23) - // added system prompt for each conversation (3/21/23) - // added folders (3/23/23) - // added prompts (3/26/23) - // added messages (4/16/23) - - let updatedConversation = conversation; - - // check for model on each conversation - if (!updatedConversation.model) { - updatedConversation = { - ...updatedConversation, - model: updatedConversation.model || OpenAIModels[OpenAIModelID.GPT_3_5], - }; - } - - // check for system prompt on each conversation - if (!updatedConversation.prompt) { - updatedConversation = { - ...updatedConversation, - prompt: updatedConversation.prompt || DEFAULT_SYSTEM_PROMPT, - }; - } - - if (!updatedConversation.temperature) { - updatedConversation = { - ...updatedConversation, - temperature: updatedConversation.temperature || DEFAULT_TEMPERATURE, - }; - } - - if (!updatedConversation.folderId) { - updatedConversation = { - ...updatedConversation, - folderId: updatedConversation.folderId || null, - }; - } - - if (!updatedConversation.messages) { - updatedConversation = { - ...updatedConversation, - messages: updatedConversation.messages || [], - }; - } - - return updatedConversation; -}; - -export const cleanConversationHistory = (history: any[]): Conversation[] => { - // added model for each conversation (3/20/23) - // added system prompt for each conversation (3/21/23) - // added folders (3/23/23) - // added prompts (3/26/23) - // added messages (4/16/23) - - if (!Array.isArray(history)) { - console.warn('history is not an array. Returning an empty array.'); - return []; - } - - return history.reduce((acc: any[], conversation) => { - try { - if (!conversation.model) { - conversation.model = OpenAIModels[OpenAIModelID.GPT_3_5]; - } - - if (!conversation.prompt) { - conversation.prompt = DEFAULT_SYSTEM_PROMPT; - } - - if (!conversation.temperature) { - conversation.temperature = DEFAULT_TEMPERATURE; - } - - if (!conversation.folderId) { - conversation.folderId = null; - } - - if (!conversation.messages) { - conversation.messages = []; - } - - acc.push(conversation); - return acc; - } catch (error) { - console.warn( - `error while cleaning conversations' history. 
Removing culprit`, error, - ); - } - return acc; - }, []); -}; diff --git a/spaces/matthoffner/starchat-ui/components/Settings/SettingDialog.tsx b/spaces/matthoffner/starchat-ui/components/Settings/SettingDialog.tsx deleted file mode 100644 index 004a9cf507695ec2f44bcc2dcf8ffe5e738d85b0..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/starchat-ui/components/Settings/SettingDialog.tsx +++ /dev/null @@ -1,105 +0,0 @@ -import { FC, useContext, useEffect, useReducer, useRef } from 'react'; - -import { useTranslation } from 'next-i18next'; - -import { useCreateReducer } from '@/hooks/useCreateReducer'; - -import { getSettings, saveSettings } from '@/utils/app/settings'; - -import { Settings } from '@/types/settings'; - -import HomeContext from '@/pages/api/home/home.context'; - -interface Props { - open: boolean; - onClose: () => void; -} - -export const SettingDialog: FC<Props> = ({ open, onClose }) => { - const { t } = useTranslation('settings'); - const settings: Settings = getSettings(); - const { state, dispatch } = useCreateReducer<Settings>({ - initialState: settings, - }); - const { dispatch: homeDispatch } = useContext(HomeContext); - const modalRef = useRef<HTMLDivElement>(null); - - useEffect(() => { - const handleMouseDown = (e: MouseEvent) => { - if (modalRef.current && !modalRef.current.contains(e.target as Node)) { - window.addEventListener('mouseup', handleMouseUp); - } - }; - - const handleMouseUp = (e: MouseEvent) => { - window.removeEventListener('mouseup', handleMouseUp); - onClose(); - }; - - window.addEventListener('mousedown', handleMouseDown); - - return () => { - window.removeEventListener('mousedown', handleMouseDown); - }; - }, [onClose]); - - const handleSave = () => { - homeDispatch({ field: 'lightMode', value: state.theme }); - saveSettings(state); - }; - - // Render nothing if the dialog is not open. - if (!open) { - return <></>; - } - - // Render the dialog. - return ( -    <div className="fixed inset-0 flex items-center justify-center bg-black bg-opacity-50 z-50">
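-      {/* A mousedown outside the dialog arms a mouseup listener that calls onClose (see the effect above). */}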
-      <div className="fixed inset-0 z-10 overflow-hidden">
-        <div className="flex min-h-screen items-center justify-center px-4 pt-4 pb-20 text-center sm:block sm:p-0">
-          <div
-            ref={modalRef}
-            className="inline-block max-h-[400px] transform overflow-y-auto rounded-lg border border-gray-300 bg-white px-4 pt-5 pb-4 text-left align-bottom shadow-xl transition-all dark:bg-[#202123] sm:my-8 sm:max-h-[600px] sm:w-full sm:max-w-lg sm:p-6 sm:align-middle"
-            role="dialog"
-          >
-            <div className="text-lg pb-4 font-bold text-black dark:text-neutral-200">
-              {t('Settings')}
-            </div>
-
-            <div className="text-sm font-bold mb-2 text-black dark:text-neutral-200">
-              {t('Theme')}
-            </div>
-
-            <select
-              className="w-full cursor-pointer bg-transparent p-2 text-neutral-700 dark:text-neutral-200"
-              value={state.theme}
-              onChange={(event) =>
-                dispatch({ field: 'theme', value: event.target.value })
-              }
-            >
-              <option value="dark">{t('Dark mode')}</option>
-              <option value="light">{t('Light mode')}</option>
-            </select>
-
-            <button
-              type="button"
-              className="w-full px-4 py-2 mt-6 border rounded-lg shadow border-neutral-500 text-neutral-900 hover:bg-neutral-100 focus:outline-none dark:border-neutral-800 dark:bg-white dark:text-black dark:hover:bg-neutral-300"
-              onClick={() => {
-                handleSave();
-                onClose();
-              }}
-            >
-              {t('Save')}
-            </button>
-          </div>
-        </div>
-      </div>
-    </div>
- ); -}; diff --git a/spaces/megaaziib/hololive-rvc-models-v2/lib/infer_pack/commons.py b/spaces/megaaziib/hololive-rvc-models-v2/lib/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/megaaziib/hololive-rvc-models-v2/lib/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, 
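-    # WaveNet-style fused gate: tanh over the first n_channels of
-    # (input_a + input_b), multiplied elementwise by sigmoid over the rest.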
input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/merve/anonymization/public/third_party/d3_.js b/spaces/merve/anonymization/public/third_party/d3_.js deleted file mode 100644 index 9c4b6815ec3cdc0e9f8a072b2d05be7ad48fa703..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/third_party/d3_.js +++ /dev/null @@ -1,143 +0,0 @@ -/** - * @license - * Lodash lodash.com/license | Underscore.js 1.8.3 underscorejs.org/LICENSE - */ -;(function(){function n(n,t){return n.set(t[0],t[1]),n}function t(n,t){return n.add(t),n}function r(n,t,r){switch(r.length){case 0:return n.call(t);case 1:return n.call(t,r[0]);case 2:return n.call(t,r[0],r[1]);case 3:return n.call(t,r[0],r[1],r[2])}return n.apply(t,r)}function e(n,t,r,e){for(var u=-1,i=null==n?0:n.length;++u"']/g,J=RegExp(G.source),Y=RegExp(H.source),Q=/<%-([\s\S]+?)%>/g,X=/<%([\s\S]+?)%>/g,nn=/<%=([\s\S]+?)%>/g,tn=/\.|\[(?:[^[\]]*|(["'])(?:(?!\1)[^\\]|\\.)*?\1)\]/,rn=/^\w*$/,en=/^\./,un=/[^.[\]]+|\[(?:(-?\d+(?:\.\d+)?)|(["'])((?:(?!\2)[^\\]|\\.)*?)\2)\]|(?=(?:\.|\[\])(?:\.|\[\]|$))/g,on=/[\\^$.*+?()[\]{}|]/g,fn=RegExp(on.source),cn=/^\s+|\s+$/g,an=/^\s+/,ln=/\s+$/,sn=/\{(?:\n\/\* \[wrapped with .+\] \*\/)?\n?/,hn=/\{\n\/\* \[wrapped with (.+)\] \*/,pn=/,? 
& /,_n=/[^\x00-\x2f\x3a-\x40\x5b-\x60\x7b-\x7f]+/g,vn=/\\(\\)?/g,gn=/\$\{([^\\}]*(?:\\.[^\\}]*)*)\}/g,dn=/\w*$/,yn=/^[-+]0x[0-9a-f]+$/i,bn=/^0b[01]+$/i,xn=/^\[object .+?Constructor\]$/,jn=/^0o[0-7]+$/i,wn=/^(?:0|[1-9]\d*)$/,mn=/[\xc0-\xd6\xd8-\xf6\xf8-\xff\u0100-\u017f]/g,An=/($^)/,kn=/['\n\r\u2028\u2029\\]/g,En="[\\ufe0e\\ufe0f]?(?:[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]|\\ud83c[\\udffb-\\udfff])?(?:\\u200d(?:[^\\ud800-\\udfff]|(?:\\ud83c[\\udde6-\\uddff]){2}|[\\ud800-\\udbff][\\udc00-\\udfff])[\\ufe0e\\ufe0f]?(?:[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]|\\ud83c[\\udffb-\\udfff])?)*",On="(?:[\\u2700-\\u27bf]|(?:\\ud83c[\\udde6-\\uddff]){2}|[\\ud800-\\udbff][\\udc00-\\udfff])"+En,Sn="(?:[^\\ud800-\\udfff][\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]?|[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]|(?:\\ud83c[\\udde6-\\uddff]){2}|[\\ud800-\\udbff][\\udc00-\\udfff]|[\\ud800-\\udfff])",In=RegExp("['\u2019]","g"),Rn=RegExp("[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]","g"),zn=RegExp("\\ud83c[\\udffb-\\udfff](?=\\ud83c[\\udffb-\\udfff])|"+Sn+En,"g"),Wn=RegExp(["[A-Z\\xc0-\\xd6\\xd8-\\xde]?[a-z\\xdf-\\xf6\\xf8-\\xff]+(?:['\u2019](?:d|ll|m|re|s|t|ve))?(?=[\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000]|[A-Z\\xc0-\\xd6\\xd8-\\xde]|$)|(?:[A-Z\\xc0-\\xd6\\xd8-\\xde]|[^\\ud800-\\udfff\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000\\d+\\u2700-\\u27bfa-z\\xdf-\\xf6\\xf8-\\xffA-Z\\xc0-\\xd6\\xd8-\\xde])+(?:['\u2019](?:D|LL|M|RE|S|T|VE))?(?=[\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000]|[A-Z\\xc0-\\xd6\\xd8-\\xde](?:[a-z\\xdf-\\xf6\\xf8-\\xff]|[^\\ud800-\\udfff\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000\\d+\\u2700-\\u27bfa-z\\xdf-\\xf6\\xf8-\\xffA-Z\\xc0-\\xd6\\xd8-\\xde])|$)|[A-Z\\xc0-\\xd6\\xd8-\\xde]?(?:[a-z\\xdf-\\xf6\\xf8-\\xff]|[^\\ud800-\\udfff\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000\\d+\\u2700-\\u27bfa-z\\xdf-\\xf6\\xf8-\\xffA-Z\\xc0-\\xd6\\xd8-\\xde])+(?:['\u2019](?:d|ll|m|re|s|t|ve))?|[A-Z\\xc0-\\xd6\\xd8-\\xde]+(?:['\u2019](?:D|LL|M|RE|S|T|VE))?|\\d*(?:(?:1ST|2ND|3RD|(?![123])\\dTH)\\b)|\\d*(?:(?:1st|2nd|3rd|(?![123])\\dth)\\b)|\\d+",On].join("|"),"g"),Bn=RegExp("[\\u200d\\ud800-\\udfff\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff\\ufe0e\\ufe0f]"),Ln=/[a-z][A-Z]|[A-Z]{2,}[a-z]|[0-9][a-zA-Z]|[a-zA-Z][0-9]|[^a-zA-Z0-9 ]/,Un="Array Buffer DataView Date Error Float32Array Float64Array Function Int8Array Int16Array Int32Array Map Math Object Promise RegExp Set String Symbol TypeError Uint8Array Uint8ClampedArray Uint16Array Uint32Array WeakMap _ clearTimeout isFinite parseInt setTimeout".split(" "),Cn={}; -Cn["[object 
Float32Array]"]=Cn["[object Float64Array]"]=Cn["[object Int8Array]"]=Cn["[object Int16Array]"]=Cn["[object Int32Array]"]=Cn["[object Uint8Array]"]=Cn["[object Uint8ClampedArray]"]=Cn["[object Uint16Array]"]=Cn["[object Uint32Array]"]=true,Cn["[object Arguments]"]=Cn["[object Array]"]=Cn["[object ArrayBuffer]"]=Cn["[object Boolean]"]=Cn["[object DataView]"]=Cn["[object Date]"]=Cn["[object Error]"]=Cn["[object Function]"]=Cn["[object Map]"]=Cn["[object Number]"]=Cn["[object Object]"]=Cn["[object RegExp]"]=Cn["[object Set]"]=Cn["[object String]"]=Cn["[object WeakMap]"]=false; -var Dn={};Dn["[object Arguments]"]=Dn["[object Array]"]=Dn["[object ArrayBuffer]"]=Dn["[object DataView]"]=Dn["[object Boolean]"]=Dn["[object Date]"]=Dn["[object Float32Array]"]=Dn["[object Float64Array]"]=Dn["[object Int8Array]"]=Dn["[object Int16Array]"]=Dn["[object Int32Array]"]=Dn["[object Map]"]=Dn["[object Number]"]=Dn["[object Object]"]=Dn["[object RegExp]"]=Dn["[object Set]"]=Dn["[object String]"]=Dn["[object Symbol]"]=Dn["[object Uint8Array]"]=Dn["[object Uint8ClampedArray]"]=Dn["[object Uint16Array]"]=Dn["[object Uint32Array]"]=true, -Dn["[object Error]"]=Dn["[object Function]"]=Dn["[object WeakMap]"]=false;var Mn,Tn={"\\":"\\","'":"'","\n":"n","\r":"r","\u2028":"u2028","\u2029":"u2029"},$n=parseFloat,Fn=parseInt,Nn=typeof global=="object"&&global&&global.Object===Object&&global,Pn=typeof self=="object"&&self&&self.Object===Object&&self,Zn=Nn||Pn||Function("return this")(),qn=typeof exports=="object"&&exports&&!exports.nodeType&&exports,Vn=qn&&typeof module=="object"&&module&&!module.nodeType&&module,Kn=Vn&&Vn.exports===qn,Gn=Kn&&Nn.process; -n:{try{Mn=Gn&&Gn.binding&&Gn.binding("util");break n}catch(n){}Mn=void 0}var Hn=Mn&&Mn.isArrayBuffer,Jn=Mn&&Mn.isDate,Yn=Mn&&Mn.isMap,Qn=Mn&&Mn.isRegExp,Xn=Mn&&Mn.isSet,nt=Mn&&Mn.isTypedArray,tt=j("length"),rt=w({"\xc0":"A","\xc1":"A","\xc2":"A","\xc3":"A","\xc4":"A","\xc5":"A","\xe0":"a","\xe1":"a","\xe2":"a","\xe3":"a","\xe4":"a","\xe5":"a","\xc7":"C","\xe7":"c","\xd0":"D","\xf0":"d","\xc8":"E","\xc9":"E","\xca":"E","\xcb":"E","\xe8":"e","\xe9":"e","\xea":"e","\xeb":"e","\xcc":"I","\xcd":"I","\xce":"I", -"\xcf":"I","\xec":"i","\xed":"i","\xee":"i","\xef":"i","\xd1":"N","\xf1":"n","\xd2":"O","\xd3":"O","\xd4":"O","\xd5":"O","\xd6":"O","\xd8":"O","\xf2":"o","\xf3":"o","\xf4":"o","\xf5":"o","\xf6":"o","\xf8":"o","\xd9":"U","\xda":"U","\xdb":"U","\xdc":"U","\xf9":"u","\xfa":"u","\xfb":"u","\xfc":"u","\xdd":"Y","\xfd":"y","\xff":"y","\xc6":"Ae","\xe6":"ae","\xde":"Th","\xfe":"th","\xdf":"ss","\u0100":"A","\u0102":"A","\u0104":"A","\u0101":"a","\u0103":"a","\u0105":"a","\u0106":"C","\u0108":"C","\u010a":"C", -"\u010c":"C","\u0107":"c","\u0109":"c","\u010b":"c","\u010d":"c","\u010e":"D","\u0110":"D","\u010f":"d","\u0111":"d","\u0112":"E","\u0114":"E","\u0116":"E","\u0118":"E","\u011a":"E","\u0113":"e","\u0115":"e","\u0117":"e","\u0119":"e","\u011b":"e","\u011c":"G","\u011e":"G","\u0120":"G","\u0122":"G","\u011d":"g","\u011f":"g","\u0121":"g","\u0123":"g","\u0124":"H","\u0126":"H","\u0125":"h","\u0127":"h","\u0128":"I","\u012a":"I","\u012c":"I","\u012e":"I","\u0130":"I","\u0129":"i","\u012b":"i","\u012d":"i", 
-"\u012f":"i","\u0131":"i","\u0134":"J","\u0135":"j","\u0136":"K","\u0137":"k","\u0138":"k","\u0139":"L","\u013b":"L","\u013d":"L","\u013f":"L","\u0141":"L","\u013a":"l","\u013c":"l","\u013e":"l","\u0140":"l","\u0142":"l","\u0143":"N","\u0145":"N","\u0147":"N","\u014a":"N","\u0144":"n","\u0146":"n","\u0148":"n","\u014b":"n","\u014c":"O","\u014e":"O","\u0150":"O","\u014d":"o","\u014f":"o","\u0151":"o","\u0154":"R","\u0156":"R","\u0158":"R","\u0155":"r","\u0157":"r","\u0159":"r","\u015a":"S","\u015c":"S", -"\u015e":"S","\u0160":"S","\u015b":"s","\u015d":"s","\u015f":"s","\u0161":"s","\u0162":"T","\u0164":"T","\u0166":"T","\u0163":"t","\u0165":"t","\u0167":"t","\u0168":"U","\u016a":"U","\u016c":"U","\u016e":"U","\u0170":"U","\u0172":"U","\u0169":"u","\u016b":"u","\u016d":"u","\u016f":"u","\u0171":"u","\u0173":"u","\u0174":"W","\u0175":"w","\u0176":"Y","\u0177":"y","\u0178":"Y","\u0179":"Z","\u017b":"Z","\u017d":"Z","\u017a":"z","\u017c":"z","\u017e":"z","\u0132":"IJ","\u0133":"ij","\u0152":"Oe","\u0153":"oe", -"\u0149":"'n","\u017f":"s"}),et=w({"&":"&","<":"<",">":">",'"':""","'":"'"}),ut=w({"&":"&","<":"<",">":">",""":'"',"'":"'"}),it=function w(En){function On(n){if(xu(n)&&!af(n)&&!(n instanceof Mn)){if(n instanceof zn)return n;if(ci.call(n,"__wrapped__"))return Pe(n)}return new zn(n)}function Sn(){}function zn(n,t){this.__wrapped__=n,this.__actions__=[],this.__chain__=!!t,this.__index__=0,this.__values__=F}function Mn(n){this.__wrapped__=n,this.__actions__=[],this.__dir__=1, -this.__filtered__=false,this.__iteratees__=[],this.__takeCount__=4294967295,this.__views__=[]}function Tn(n){var t=-1,r=null==n?0:n.length;for(this.clear();++t=t?n:t)),n}function dt(n,t,r,e,i,o){var f,c=1&t,a=2&t,l=4&t;if(r&&(f=i?r(n,e,i,o):r(n)),f!==F)return f;if(!bu(n))return n;if(e=af(n)){if(f=Ee(n),!c)return Mr(n,f)}else{var s=yo(n),h="[object Function]"==s||"[object GeneratorFunction]"==s;if(sf(n))return Wr(n,c);if("[object Object]"==s||"[object Arguments]"==s||h&&!i){if(f=a||h?{}:Oe(n),!c)return a?Fr(n,pt(f,n)):$r(n,ht(f,n))}else{if(!Dn[s])return i?n:{};f=Se(n,s,dt,c)}}if(o||(o=new Vn), -i=o.get(n))return i;o.set(n,f);var a=l?a?ye:de:a?Uu:Lu,p=e?F:a(n);return u(p||n,function(e,u){p&&(u=e,e=n[u]),at(f,u,dt(e,t,r,u,n,o))}),f}function yt(n){var t=Lu(n);return function(r){return bt(r,n,t)}}function bt(n,t,r){var e=r.length;if(null==n)return!e;for(n=ni(n);e--;){var u=r[e],i=t[u],o=n[u];if(o===F&&!(u in n)||!i(o))return false}return true}function xt(n,t,r){if(typeof n!="function")throw new ei("Expected a function");return jo(function(){n.apply(F,r)},t)}function jt(n,t,r,e){var u=-1,i=c,o=true,f=n.length,s=[],h=t.length; -if(!f)return s;r&&(t=l(t,S(r))),e?(i=a,o=false):200<=t.length&&(i=R,o=false,t=new qn(t));n:for(;++ut}function Bt(n,t){return null!=n&&ci.call(n,t)}function Lt(n,t){return null!=n&&t in ni(n)}function Ut(n,t,r){for(var e=r?a:c,u=n[0].length,i=n.length,o=i,f=Hu(i),s=1/0,h=[];o--;){var p=n[o];o&&t&&(p=l(p,S(t))),s=Mi(p.length,s),f[o]=!r&&(t||120<=u&&120<=p.length)?new qn(o&&p):F}var p=n[0],_=-1,v=f[0];n:for(;++_t.length?n:It(n,vr(t,0,-1)),t=null==n?n:n[$e(Ge(t))],null==t?F:r(t,n,e)}function Mt(n){return xu(n)&&"[object Arguments]"==zt(n)}function Tt(n){return xu(n)&&"[object ArrayBuffer]"==zt(n)}function $t(n){return xu(n)&&"[object Date]"==zt(n)}function Ft(n,t,r,e,u){if(n===t)t=true;else if(null==n||null==t||!xu(n)&&!xu(t))t=n!==n&&t!==t;else n:{ -var i=af(n),o=af(t),f=i?"[object Array]":yo(n),c=o?"[object Array]":yo(t),f="[object Arguments]"==f?"[object Object]":f,c="[object Arguments]"==c?"[object 
Object]":c,a="[object Object]"==f,o="[object Object]"==c;if((c=f==c)&&sf(n)){if(!sf(t)){t=false;break n}i=true,a=false}if(c&&!a)u||(u=new Vn),t=i||gf(n)?_e(n,t,r,e,Ft,u):ve(n,t,f,r,e,Ft,u);else{if(!(1&r)&&(i=a&&ci.call(n,"__wrapped__"),f=o&&ci.call(t,"__wrapped__"),i||f)){n=i?n.value():n,t=f?t.value():t,u||(u=new Vn),t=Ft(n,t,r,e,u);break n}if(c)t:if(u||(u=new Vn), -i=1&r,f=de(n),o=f.length,c=de(t).length,o==c||i){for(a=o;a--;){var l=f[a];if(!(i?l in t:ci.call(t,l))){t=false;break t}}if((c=u.get(n))&&u.get(t))t=c==t;else{c=true,u.set(n,t),u.set(t,n);for(var s=i;++at?r:0,Re(t,r)?n[t]:F}function rr(n,t,r){var e=-1;return t=l(t.length?t:[Nu],S(je())),n=Yt(n,function(n){return{a:l(t,function(t){return t(n)}),b:++e,c:n}}),A(n,function(n,t){var e;n:{e=-1;for(var u=n.a,i=t.a,o=u.length,f=r.length;++e=f?c:c*("desc"==r[e]?-1:1); -break n}}e=n.b-t.b}return e})}function er(n,t){return ur(n,t,function(t,r){return Bu(n,r)})}function ur(n,t,r){for(var e=-1,u=t.length,i={};++et||9007199254740991t&&(t=-t>u?0:u+t),r=r>u?u:r,0>r&&(r+=u),u=t>r?0:r-t>>>0,t>>>=0,r=Hu(u);++e=u){for(;e>>1,o=n[i];null!==o&&!Au(o)&&(r?o<=t:ot.length?n:It(n,vr(t,0,-1)), -null==n||delete n[$e(Ge(t))]}function Ar(n,t,r,e){for(var u=n.length,i=e?u:-1;(e?i--:++ie)return e?wr(n[0]):[];for(var u=-1,i=Hu(e);++u=e?n:vr(n,t,r)}function Wr(n,t){if(t)return n.slice();var r=n.length,r=yi?yi(r):new n.constructor(r);return n.copy(r),r}function Br(n){var t=new n.constructor(n.byteLength);return new di(t).set(new di(n)),t}function Lr(n,t){return new n.constructor(t?Br(n.buffer):n.buffer,n.byteOffset,n.length)}function Ur(n,t){ -if(n!==t){var r=n!==F,e=null===n,u=n===n,i=Au(n),o=t!==F,f=null===t,c=t===t,a=Au(t);if(!f&&!a&&!i&&n>t||i&&o&&c&&!f&&!a||e&&o&&c||!r&&c||!u)return 1;if(!e&&!i&&!a&&nu?F:i,u=1),t=ni(t);++eo&&f[0]!==a&&f[o-1]!==a?[]:C(f,a),o-=c.length,or?r?ar(t,n):t:(r=ar(t,Ri(n/T(t))),Bn.test(t)?zr($(r),0,n).join(""):r.slice(0,n))}function ue(n,t,e,u){function i(){for(var t=-1,c=arguments.length,a=-1,l=u.length,s=Hu(l+c),h=this&&this!==Zn&&this instanceof i?f:n;++at||e)&&(1&n&&(i[2]=h[2],t|=1&r?0:4),(r=h[3])&&(e=i[3],i[3]=e?Cr(e,r,h[4]):r,i[4]=e?C(i[3],"__lodash_placeholder__"):h[4]),(r=h[5])&&(e=i[5],i[5]=e?Dr(e,r,h[6]):r,i[6]=e?C(i[5],"__lodash_placeholder__"):h[6]),(r=h[7])&&(i[7]=r),128&n&&(i[8]=null==i[8]?h[8]:Mi(i[8],h[8])),null==i[9]&&(i[9]=h[9]),i[0]=h[0],i[1]=t),n=i[0],t=i[1], -r=i[2],e=i[3],u=i[4],f=i[9]=i[9]===F?c?0:n.length:Di(i[9]-a,0),!f&&24&t&&(t&=-25),De((h?lo:xo)(t&&1!=t?8==t||16==t?Jr(n,t,f):32!=t&&33!=t||u.length?Xr.apply(F,i):ue(n,t,r,e):Vr(n,t,r),i),n,t)}function se(n,t,r,e){return n===F||hu(n,ii[r])&&!ci.call(e,r)?t:n}function he(n,t,r,e,u,i){return bu(n)&&bu(t)&&(i.set(t,n),nr(n,t,F,he,i),i.delete(t)),n}function pe(n){return wu(n)?F:n}function _e(n,t,r,e,u,i){var o=1&r,f=n.length,c=t.length;if(f!=c&&!(o&&c>f))return false;if((c=i.get(n))&&i.get(t))return c==t;var c=-1,a=true,l=2&r?new qn:F; -for(i.set(n,t),i.set(t,n);++cr&&(r=Di(e+r,0)),g(n,je(t,3),r)):-1}function qe(n,t,r){var e=null==n?0:n.length;if(!e)return-1;var u=e-1;return r!==F&&(u=Ou(r),u=0>r?Di(e+u,0):Mi(u,e-1)), -g(n,je(t,3),u,true)}function Ve(n){return(null==n?0:n.length)?kt(n,1):[]}function Ke(n){return n&&n.length?n[0]:F}function Ge(n){var t=null==n?0:n.length;return t?n[t-1]:F}function He(n,t){return n&&n.length&&t&&t.length?or(n,t):n}function Je(n){return null==n?n:Ni.call(n)}function Ye(n){if(!n||!n.length)return[];var t=0;return n=f(n,function(n){if(_u(n))return t=Di(n.length,t),true}),E(t,function(t){return l(n,j(t))})}function 
Qe(n,t){if(!n||!n.length)return[];var e=Ye(n);return null==t?e:l(e,function(n){ -return r(t,F,n)})}function Xe(n){return n=On(n),n.__chain__=true,n}function nu(n,t){return t(n)}function tu(){return this}function ru(n,t){return(af(n)?u:oo)(n,je(t,3))}function eu(n,t){return(af(n)?i:fo)(n,je(t,3))}function uu(n,t){return(af(n)?l:Yt)(n,je(t,3))}function iu(n,t,r){return t=r?F:t,t=n&&null==t?n.length:t,le(n,128,F,F,F,F,t)}function ou(n,t){var r;if(typeof t!="function")throw new ei("Expected a function");return n=Ou(n),function(){return 0<--n&&(r=t.apply(this,arguments)),1>=n&&(t=F), -r}}function fu(n,t,r){return t=r?F:t,n=le(n,8,F,F,F,F,F,t),n.placeholder=fu.placeholder,n}function cu(n,t,r){return t=r?F:t,n=le(n,16,F,F,F,F,F,t),n.placeholder=cu.placeholder,n}function au(n,t,r){function e(t){var r=c,e=a;return c=a=F,_=t,s=n.apply(e,r)}function u(n){var r=n-p;return n-=_,p===F||r>=t||0>r||g&&n>=l}function i(){var n=Jo();if(u(n))return o(n);var r,e=jo;r=n-_,n=t-(n-p),r=g?Mi(n,l-r):n,h=e(i,r)}function o(n){return h=F,d&&c?e(n):(c=a=F,s)}function f(){var n=Jo(),r=u(n);if(c=arguments, -a=this,p=n,r){if(h===F)return _=n=p,h=jo(i,t),v?e(n):s;if(g)return h=jo(i,t),e(p)}return h===F&&(h=jo(i,t)),s}var c,a,l,s,h,p,_=0,v=false,g=false,d=true;if(typeof n!="function")throw new ei("Expected a function");return t=Iu(t)||0,bu(r)&&(v=!!r.leading,l=(g="maxWait"in r)?Di(Iu(r.maxWait)||0,t):l,d="trailing"in r?!!r.trailing:d),f.cancel=function(){h!==F&&ho(h),_=0,c=p=a=h=F},f.flush=function(){return h===F?s:o(Jo())},f}function lu(n,t){function r(){var e=arguments,u=t?t.apply(this,e):e[0],i=r.cache;return i.has(u)?i.get(u):(e=n.apply(this,e), -r.cache=i.set(u,e)||i,e)}if(typeof n!="function"||null!=t&&typeof t!="function")throw new ei("Expected a function");return r.cache=new(lu.Cache||Pn),r}function su(n){if(typeof n!="function")throw new ei("Expected a function");return function(){var t=arguments;switch(t.length){case 0:return!n.call(this);case 1:return!n.call(this,t[0]);case 2:return!n.call(this,t[0],t[1]);case 3:return!n.call(this,t[0],t[1],t[2])}return!n.apply(this,t)}}function hu(n,t){return n===t||n!==n&&t!==t}function pu(n){return null!=n&&yu(n.length)&&!gu(n); -}function _u(n){return xu(n)&&pu(n)}function vu(n){if(!xu(n))return false;var t=zt(n);return"[object Error]"==t||"[object DOMException]"==t||typeof n.message=="string"&&typeof n.name=="string"&&!wu(n)}function gu(n){return!!bu(n)&&(n=zt(n),"[object Function]"==n||"[object GeneratorFunction]"==n||"[object AsyncFunction]"==n||"[object Proxy]"==n)}function du(n){return typeof n=="number"&&n==Ou(n)}function yu(n){return typeof n=="number"&&-1=n}function bu(n){var t=typeof n;return null!=n&&("object"==t||"function"==t); -}function xu(n){return null!=n&&typeof n=="object"}function ju(n){return typeof n=="number"||xu(n)&&"[object Number]"==zt(n)}function wu(n){return!(!xu(n)||"[object Object]"!=zt(n))&&(n=bi(n),null===n||(n=ci.call(n,"constructor")&&n.constructor,typeof n=="function"&&n instanceof n&&fi.call(n)==hi))}function mu(n){return typeof n=="string"||!af(n)&&xu(n)&&"[object String]"==zt(n)}function Au(n){return typeof n=="symbol"||xu(n)&&"[object Symbol]"==zt(n)}function ku(n){if(!n)return[];if(pu(n))return mu(n)?$(n):Mr(n); -if(Ai&&n[Ai]){n=n[Ai]();for(var t,r=[];!(t=n.next()).done;)r.push(t.value);return r}return t=yo(n),("[object Map]"==t?L:"[object Set]"==t?D:Du)(n)}function Eu(n){return n?(n=Iu(n),n===N||n===-N?1.7976931348623157e308*(0>n?-1:1):n===n?n:0):0===n?n:0}function Ou(n){n=Eu(n);var t=n%1;return n===n?t?n-t:n:0}function Su(n){return 
n?gt(Ou(n),0,4294967295):0}function Iu(n){if(typeof n=="number")return n;if(Au(n))return P;if(bu(n)&&(n=typeof n.valueOf=="function"?n.valueOf():n,n=bu(n)?n+"":n),typeof n!="string")return 0===n?n:+n; -n=n.replace(cn,"");var t=bn.test(n);return t||jn.test(n)?Fn(n.slice(2),t?2:8):yn.test(n)?P:+n}function Ru(n){return Tr(n,Uu(n))}function zu(n){return null==n?"":jr(n)}function Wu(n,t,r){return n=null==n?F:It(n,t),n===F?r:n}function Bu(n,t){return null!=n&&ke(n,t,Lt)}function Lu(n){return pu(n)?Gn(n):Ht(n)}function Uu(n){if(pu(n))n=Gn(n,true);else if(bu(n)){var t,r=Le(n),e=[];for(t in n)("constructor"!=t||!r&&ci.call(n,t))&&e.push(t);n=e}else{if(t=[],null!=n)for(r in ni(n))t.push(r);n=t}return n}function Cu(n,t){ -if(null==n)return{};var r=l(ye(n),function(n){return[n]});return t=je(t),ur(n,r,function(n,r){return t(n,r[0])})}function Du(n){return null==n?[]:I(n,Lu(n))}function Mu(n){return Nf(zu(n).toLowerCase())}function Tu(n){return(n=zu(n))&&n.replace(mn,rt).replace(Rn,"")}function $u(n,t,r){return n=zu(n),t=r?F:t,t===F?Ln.test(n)?n.match(Wn)||[]:n.match(_n)||[]:n.match(t)||[]}function Fu(n){return function(){return n}}function Nu(n){return n}function Pu(n){return Gt(typeof n=="function"?n:dt(n,1))}function Zu(n,t,r){ -var e=Lu(t),i=St(t,e);null!=r||bu(t)&&(i.length||!e.length)||(r=t,t=n,n=this,i=St(t,Lu(t)));var o=!(bu(r)&&"chain"in r&&!r.chain),f=gu(n);return u(i,function(r){var e=t[r];n[r]=e,f&&(n.prototype[r]=function(){var t=this.__chain__;if(o||t){var r=n(this.__wrapped__);return(r.__actions__=Mr(this.__actions__)).push({func:e,args:arguments,thisArg:n}),r.__chain__=t,r}return e.apply(n,s([this.value()],arguments))})}),n}function qu(){}function Vu(n){return We(n)?j($e(n)):ir(n)}function Ku(){return[]}function Gu(){ -return false}En=null==En?Zn:it.defaults(Zn.Object(),En,it.pick(Zn,Un));var Hu=En.Array,Ju=En.Date,Yu=En.Error,Qu=En.Function,Xu=En.Math,ni=En.Object,ti=En.RegExp,ri=En.String,ei=En.TypeError,ui=Hu.prototype,ii=ni.prototype,oi=En["__core-js_shared__"],fi=Qu.prototype.toString,ci=ii.hasOwnProperty,ai=0,li=function(){var n=/[^.]+$/.exec(oi&&oi.keys&&oi.keys.IE_PROTO||"");return n?"Symbol(src)_1."+n:""}(),si=ii.toString,hi=fi.call(ni),pi=Zn._,_i=ti("^"+fi.call(ci).replace(on,"\\$&").replace(/hasOwnProperty|(function).*?(?=\\\()| for .+?(?=\\\])/g,"$1.*?")+"$"),vi=Kn?En.Buffer:F,gi=En.Symbol,di=En.Uint8Array,yi=vi?vi.f:F,bi=U(ni.getPrototypeOf,ni),xi=ni.create,ji=ii.propertyIsEnumerable,wi=ui.splice,mi=gi?gi.isConcatSpreadable:F,Ai=gi?gi.iterator:F,ki=gi?gi.toStringTag:F,Ei=function(){ -try{var n=Ae(ni,"defineProperty");return n({},"",{}),n}catch(n){}}(),Oi=En.clearTimeout!==Zn.clearTimeout&&En.clearTimeout,Si=Ju&&Ju.now!==Zn.Date.now&&Ju.now,Ii=En.setTimeout!==Zn.setTimeout&&En.setTimeout,Ri=Xu.ceil,zi=Xu.floor,Wi=ni.getOwnPropertySymbols,Bi=vi?vi.isBuffer:F,Li=En.isFinite,Ui=ui.join,Ci=U(ni.keys,ni),Di=Xu.max,Mi=Xu.min,Ti=Ju.now,$i=En.parseInt,Fi=Xu.random,Ni=ui.reverse,Pi=Ae(En,"DataView"),Zi=Ae(En,"Map"),qi=Ae(En,"Promise"),Vi=Ae(En,"Set"),Ki=Ae(En,"WeakMap"),Gi=Ae(ni,"create"),Hi=Ki&&new Ki,Ji={},Yi=Fe(Pi),Qi=Fe(Zi),Xi=Fe(qi),no=Fe(Vi),to=Fe(Ki),ro=gi?gi.prototype:F,eo=ro?ro.valueOf:F,uo=ro?ro.toString:F,io=function(){ -function n(){}return function(t){return bu(t)?xi?xi(t):(n.prototype=t,t=new 
n,n.prototype=F,t):{}}}();On.templateSettings={escape:Q,evaluate:X,interpolate:nn,variable:"",imports:{_:On}},On.prototype=Sn.prototype,On.prototype.constructor=On,zn.prototype=io(Sn.prototype),zn.prototype.constructor=zn,Mn.prototype=io(Sn.prototype),Mn.prototype.constructor=Mn,Tn.prototype.clear=function(){this.__data__=Gi?Gi(null):{},this.size=0},Tn.prototype.delete=function(n){return n=this.has(n)&&delete this.__data__[n], -this.size-=n?1:0,n},Tn.prototype.get=function(n){var t=this.__data__;return Gi?(n=t[n],"__lodash_hash_undefined__"===n?F:n):ci.call(t,n)?t[n]:F},Tn.prototype.has=function(n){var t=this.__data__;return Gi?t[n]!==F:ci.call(t,n)},Tn.prototype.set=function(n,t){var r=this.__data__;return this.size+=this.has(n)?0:1,r[n]=Gi&&t===F?"__lodash_hash_undefined__":t,this},Nn.prototype.clear=function(){this.__data__=[],this.size=0},Nn.prototype.delete=function(n){var t=this.__data__;return n=lt(t,n),!(0>n)&&(n==t.length-1?t.pop():wi.call(t,n,1), ---this.size,true)},Nn.prototype.get=function(n){var t=this.__data__;return n=lt(t,n),0>n?F:t[n][1]},Nn.prototype.has=function(n){return-1e?(++this.size,r.push([n,t])):r[e][1]=t,this},Pn.prototype.clear=function(){this.size=0,this.__data__={hash:new Tn,map:new(Zi||Nn),string:new Tn}},Pn.prototype.delete=function(n){return n=we(this,n).delete(n),this.size-=n?1:0,n},Pn.prototype.get=function(n){return we(this,n).get(n); -},Pn.prototype.has=function(n){return we(this,n).has(n)},Pn.prototype.set=function(n,t){var r=we(this,n),e=r.size;return r.set(n,t),this.size+=r.size==e?0:1,this},qn.prototype.add=qn.prototype.push=function(n){return this.__data__.set(n,"__lodash_hash_undefined__"),this},qn.prototype.has=function(n){return this.__data__.has(n)},Vn.prototype.clear=function(){this.__data__=new Nn,this.size=0},Vn.prototype.delete=function(n){var t=this.__data__;return n=t.delete(n),this.size=t.size,n},Vn.prototype.get=function(n){ -return this.__data__.get(n)},Vn.prototype.has=function(n){return this.__data__.has(n)},Vn.prototype.set=function(n,t){var r=this.__data__;if(r instanceof Nn){var e=r.__data__;if(!Zi||199>e.length)return e.push([n,t]),this.size=++r.size,this;r=this.__data__=new Pn(e)}return r.set(n,t),this.size=r.size,this};var oo=Zr(Et),fo=Zr(Ot,true),co=qr(),ao=qr(true),lo=Hi?function(n,t){return Hi.set(n,t),n}:Nu,so=Ei?function(n,t){return Ei(n,"toString",{configurable:true,enumerable:false,value:Fu(t),writable:true})}:Nu,ho=Oi||function(n){ -return Zn.clearTimeout(n)},po=Vi&&1/D(new Vi([,-0]))[1]==N?function(n){return new Vi(n)}:qu,_o=Hi?function(n){return Hi.get(n)}:qu,vo=Wi?function(n){return null==n?[]:(n=ni(n),f(Wi(n),function(t){return ji.call(n,t)}))}:Ku,go=Wi?function(n){for(var t=[];n;)s(t,vo(n)),n=bi(n);return t}:Ku,yo=zt;(Pi&&"[object DataView]"!=yo(new Pi(new ArrayBuffer(1)))||Zi&&"[object Map]"!=yo(new Zi)||qi&&"[object Promise]"!=yo(qi.resolve())||Vi&&"[object Set]"!=yo(new Vi)||Ki&&"[object WeakMap]"!=yo(new Ki))&&(yo=function(n){ -var t=zt(n);if(n=(n="[object Object]"==t?n.constructor:F)?Fe(n):"")switch(n){case Yi:return"[object DataView]";case Qi:return"[object Map]";case Xi:return"[object Promise]";case no:return"[object Set]";case to:return"[object WeakMap]"}return t});var bo=oi?gu:Gu,xo=Me(lo),jo=Ii||function(n,t){return Zn.setTimeout(n,t)},wo=Me(so),mo=function(n){n=lu(n,function(n){return 500===t.size&&t.clear(),n});var t=n.cache;return n}(function(n){var t=[];return en.test(n)&&t.push(""),n.replace(un,function(n,r,e,u){ -t.push(e?u.replace(vn,"$1"):r||n)}),t}),Ao=lr(function(n,t){return 
_u(n)?jt(n,kt(t,1,_u,true)):[]}),ko=lr(function(n,t){var r=Ge(t);return _u(r)&&(r=F),_u(n)?jt(n,kt(t,1,_u,true),je(r,2)):[]}),Eo=lr(function(n,t){var r=Ge(t);return _u(r)&&(r=F),_u(n)?jt(n,kt(t,1,_u,true),F,r):[]}),Oo=lr(function(n){var t=l(n,Sr);return t.length&&t[0]===n[0]?Ut(t):[]}),So=lr(function(n){var t=Ge(n),r=l(n,Sr);return t===Ge(r)?t=F:r.pop(),r.length&&r[0]===n[0]?Ut(r,je(t,2)):[]}),Io=lr(function(n){var t=Ge(n),r=l(n,Sr);return(t=typeof t=="function"?t:F)&&r.pop(), -r.length&&r[0]===n[0]?Ut(r,F,t):[]}),Ro=lr(He),zo=ge(function(n,t){var r=null==n?0:n.length,e=vt(n,t);return fr(n,l(t,function(n){return Re(n,r)?+n:n}).sort(Ur)),e}),Wo=lr(function(n){return wr(kt(n,1,_u,true))}),Bo=lr(function(n){var t=Ge(n);return _u(t)&&(t=F),wr(kt(n,1,_u,true),je(t,2))}),Lo=lr(function(n){var t=Ge(n),t=typeof t=="function"?t:F;return wr(kt(n,1,_u,true),F,t)}),Uo=lr(function(n,t){return _u(n)?jt(n,t):[]}),Co=lr(function(n){return Er(f(n,_u))}),Do=lr(function(n){var t=Ge(n);return _u(t)&&(t=F), -Er(f(n,_u),je(t,2))}),Mo=lr(function(n){var t=Ge(n),t=typeof t=="function"?t:F;return Er(f(n,_u),F,t)}),To=lr(Ye),$o=lr(function(n){var t=n.length,t=1=t}),cf=Mt(function(){return arguments}())?Mt:function(n){return xu(n)&&ci.call(n,"callee")&&!ji.call(n,"callee")},af=Hu.isArray,lf=Hn?S(Hn):Tt,sf=Bi||Gu,hf=Jn?S(Jn):$t,pf=Yn?S(Yn):Nt,_f=Qn?S(Qn):qt,vf=Xn?S(Xn):Vt,gf=nt?S(nt):Kt,df=oe(Jt),yf=oe(function(n,t){return n<=t}),bf=Pr(function(n,t){ -if(Le(t)||pu(t))Tr(t,Lu(t),n);else for(var r in t)ci.call(t,r)&&at(n,r,t[r])}),xf=Pr(function(n,t){Tr(t,Uu(t),n)}),jf=Pr(function(n,t,r,e){Tr(t,Uu(t),n,e)}),wf=Pr(function(n,t,r,e){Tr(t,Lu(t),n,e)}),mf=ge(vt),Af=lr(function(n){return n.push(F,se),r(jf,F,n)}),kf=lr(function(n){return n.push(F,he),r(Rf,F,n)}),Ef=ne(function(n,t,r){n[t]=r},Fu(Nu)),Of=ne(function(n,t,r){ci.call(n,t)?n[t].push(r):n[t]=[r]},je),Sf=lr(Dt),If=Pr(function(n,t,r){nr(n,t,r)}),Rf=Pr(function(n,t,r,e){nr(n,t,r,e)}),zf=ge(function(n,t){ -var r={};if(null==n)return r;var e=false;t=l(t,function(t){return t=Rr(t,n),e||(e=1--n)return t.apply(this,arguments)}},On.ary=iu,On.assign=bf,On.assignIn=xf,On.assignInWith=jf,On.assignWith=wf,On.at=mf,On.before=ou,On.bind=Yo,On.bindAll=Zf,On.bindKey=Qo,On.castArray=function(){if(!arguments.length)return[];var n=arguments[0];return af(n)?n:[n]}, -On.chain=Xe,On.chunk=function(n,t,r){if(t=(r?ze(n,t,r):t===F)?1:Di(Ou(t),0),r=null==n?0:n.length,!r||1>t)return[];for(var e=0,u=0,i=Hu(Ri(r/t));et?0:t,e)):[]},On.dropRight=function(n,t,r){var e=null==n?0:n.length;return e?(t=r||t===F?1:Ou(t),t=e-t,vr(n,0,0>t?0:t)):[]},On.dropRightWhile=function(n,t){return n&&n.length?Ar(n,je(t,3),true,true):[]},On.dropWhile=function(n,t){return n&&n.length?Ar(n,je(t,3),true):[]},On.fill=function(n,t,r,e){var u=null==n?0:n.length;if(!u)return[];for(r&&typeof r!="number"&&ze(n,t,r)&&(r=0,e=u),u=n.length,r=Ou(r),0>r&&(r=-r>u?0:u+r),e=e===F||e>u?u:Ou(e),0>e&&(e+=u),e=r>e?0:Su(e);r>>0,r?(n=zu(n))&&(typeof t=="string"||null!=t&&!_f(t))&&(t=jr(t), -!t&&Bn.test(n))?zr($(n),0,r):n.split(t,r):[]},On.spread=function(n,t){if(typeof n!="function")throw new ei("Expected a function");return t=null==t?0:Di(Ou(t),0),lr(function(e){var u=e[t];return e=zr(e,0,t),u&&s(e,u),r(n,this,e)})},On.tail=function(n){var t=null==n?0:n.length;return t?vr(n,1,t):[]},On.take=function(n,t,r){return n&&n.length?(t=r||t===F?1:Ou(t),vr(n,0,0>t?0:t)):[]},On.takeRight=function(n,t,r){var e=null==n?0:n.length;return e?(t=r||t===F?1:Ou(t),t=e-t,vr(n,0>t?0:t,e)):[]},On.takeRightWhile=function(n,t){ -return 
n&&n.length?Ar(n,je(t,3),false,true):[]},On.takeWhile=function(n,t){return n&&n.length?Ar(n,je(t,3)):[]},On.tap=function(n,t){return t(n),n},On.throttle=function(n,t,r){var e=true,u=true;if(typeof n!="function")throw new ei("Expected a function");return bu(r)&&(e="leading"in r?!!r.leading:e,u="trailing"in r?!!r.trailing:u),au(n,t,{leading:e,maxWait:t,trailing:u})},On.thru=nu,On.toArray=ku,On.toPairs=Bf,On.toPairsIn=Lf,On.toPath=function(n){return af(n)?l(n,$e):Au(n)?[n]:Mr(mo(zu(n)))},On.toPlainObject=Ru, -On.transform=function(n,t,r){var e=af(n),i=e||sf(n)||gf(n);if(t=je(t,4),null==r){var o=n&&n.constructor;r=i?e?new o:[]:bu(n)&&gu(o)?io(bi(n)):{}}return(i?u:Et)(n,function(n,e,u){return t(r,n,e,u)}),r},On.unary=function(n){return iu(n,1)},On.union=Wo,On.unionBy=Bo,On.unionWith=Lo,On.uniq=function(n){return n&&n.length?wr(n):[]},On.uniqBy=function(n,t){return n&&n.length?wr(n,je(t,2)):[]},On.uniqWith=function(n,t){return t=typeof t=="function"?t:F,n&&n.length?wr(n,F,t):[]},On.unset=function(n,t){return null==n||mr(n,t); -},On.unzip=Ye,On.unzipWith=Qe,On.update=function(n,t,r){return null==n?n:pr(n,t,Ir(r)(It(n,t)),void 0)},On.updateWith=function(n,t,r,e){return e=typeof e=="function"?e:F,null!=n&&(n=pr(n,t,Ir(r)(It(n,t)),e)),n},On.values=Du,On.valuesIn=function(n){return null==n?[]:I(n,Uu(n))},On.without=Uo,On.words=$u,On.wrap=function(n,t){return rf(Ir(t),n)},On.xor=Co,On.xorBy=Do,On.xorWith=Mo,On.zip=To,On.zipObject=function(n,t){return Or(n||[],t||[],at)},On.zipObjectDeep=function(n,t){return Or(n||[],t||[],pr); -},On.zipWith=$o,On.entries=Bf,On.entriesIn=Lf,On.extend=xf,On.extendWith=jf,Zu(On,On),On.add=nc,On.attempt=Pf,On.camelCase=Uf,On.capitalize=Mu,On.ceil=tc,On.clamp=function(n,t,r){return r===F&&(r=t,t=F),r!==F&&(r=Iu(r),r=r===r?r:0),t!==F&&(t=Iu(t),t=t===t?t:0),gt(Iu(n),t,r)},On.clone=function(n){return dt(n,4)},On.cloneDeep=function(n){return dt(n,5)},On.cloneDeepWith=function(n,t){return t=typeof t=="function"?t:F,dt(n,5,t)},On.cloneWith=function(n,t){return t=typeof t=="function"?t:F,dt(n,4,t)}, -On.conformsTo=function(n,t){return null==t||bt(n,t,Lu(t))},On.deburr=Tu,On.defaultTo=function(n,t){return null==n||n!==n?t:n},On.divide=rc,On.endsWith=function(n,t,r){n=zu(n),t=jr(t);var e=n.length,e=r=r===F?e:gt(Ou(r),0,e);return r-=t.length,0<=r&&n.slice(r,e)==t},On.eq=hu,On.escape=function(n){return(n=zu(n))&&Y.test(n)?n.replace(H,et):n},On.escapeRegExp=function(n){return(n=zu(n))&&fn.test(n)?n.replace(on,"\\$&"):n},On.every=function(n,t,r){var e=af(n)?o:wt;return r&&ze(n,t,r)&&(t=F),e(n,je(t,3)); -},On.find=Po,On.findIndex=Ze,On.findKey=function(n,t){return v(n,je(t,3),Et)},On.findLast=Zo,On.findLastIndex=qe,On.findLastKey=function(n,t){return v(n,je(t,3),Ot)},On.floor=ec,On.forEach=ru,On.forEachRight=eu,On.forIn=function(n,t){return null==n?n:co(n,je(t,3),Uu)},On.forInRight=function(n,t){return null==n?n:ao(n,je(t,3),Uu)},On.forOwn=function(n,t){return n&&Et(n,je(t,3))},On.forOwnRight=function(n,t){return n&&Ot(n,je(t,3))},On.get=Wu,On.gt=of,On.gte=ff,On.has=function(n,t){return null!=n&&ke(n,t,Bt); -},On.hasIn=Bu,On.head=Ke,On.identity=Nu,On.includes=function(n,t,r,e){return n=pu(n)?n:Du(n),r=r&&!e?Ou(r):0,e=n.length,0>r&&(r=Di(e+r,0)),mu(n)?r<=e&&-1r&&(r=Di(e+r,0)),d(n,t,r)):-1},On.inRange=function(n,t,r){return t=Eu(t),r===F?(r=t,t=0):r=Eu(r),n=Iu(n),n>=Mi(t,r)&&n=n},On.isSet=vf,On.isString=mu,On.isSymbol=Au,On.isTypedArray=gf,On.isUndefined=function(n){return n===F},On.isWeakMap=function(n){return xu(n)&&"[object WeakMap]"==yo(n)},On.isWeakSet=function(n){return 
xu(n)&&"[object WeakSet]"==zt(n)},On.join=function(n,t){ -return null==n?"":Ui.call(n,t)},On.kebabCase=Cf,On.last=Ge,On.lastIndexOf=function(n,t,r){var e=null==n?0:n.length;if(!e)return-1;var u=e;if(r!==F&&(u=Ou(r),u=0>u?Di(e+u,0):Mi(u,e-1)),t===t){for(r=u+1;r--&&n[r]!==t;);n=r}else n=g(n,b,u,true);return n},On.lowerCase=Df,On.lowerFirst=Mf,On.lt=df,On.lte=yf,On.max=function(n){return n&&n.length?mt(n,Nu,Wt):F},On.maxBy=function(n,t){return n&&n.length?mt(n,je(t,2),Wt):F},On.mean=function(n){return x(n,Nu)},On.meanBy=function(n,t){return x(n,je(t,2))},On.min=function(n){ -return n&&n.length?mt(n,Nu,Jt):F},On.minBy=function(n,t){return n&&n.length?mt(n,je(t,2),Jt):F},On.stubArray=Ku,On.stubFalse=Gu,On.stubObject=function(){return{}},On.stubString=function(){return""},On.stubTrue=function(){return true},On.multiply=uc,On.nth=function(n,t){return n&&n.length?tr(n,Ou(t)):F},On.noConflict=function(){return Zn._===this&&(Zn._=pi),this},On.noop=qu,On.now=Jo,On.pad=function(n,t,r){n=zu(n);var e=(t=Ou(t))?T(n):0;return!t||e>=t?n:(t=(t-e)/2,ee(zi(t),r)+n+ee(Ri(t),r))},On.padEnd=function(n,t,r){ -n=zu(n);var e=(t=Ou(t))?T(n):0;return t&&et){var e=n;n=t,t=e}return r||n%1||t%1?(r=Fi(),Mi(n+r*(t-n+$n("1e-"+((r+"").length-1))),t)):cr(n,t); -},On.reduce=function(n,t,r){var e=af(n)?h:m,u=3>arguments.length;return e(n,je(t,4),r,u,oo)},On.reduceRight=function(n,t,r){var e=af(n)?p:m,u=3>arguments.length;return e(n,je(t,4),r,u,fo)},On.repeat=function(n,t,r){return t=(r?ze(n,t,r):t===F)?1:Ou(t),ar(zu(n),t)},On.replace=function(){var n=arguments,t=zu(n[0]);return 3>n.length?t:t.replace(n[1],n[2])},On.result=function(n,t,r){t=Rr(t,n);var e=-1,u=t.length;for(u||(u=1,n=F);++en||9007199254740991=i)return n;if(i=r-T(e),1>i)return e; -if(r=o?zr(o,0,i).join(""):n.slice(0,i),u===F)return r+e;if(o&&(i+=r.length-i),_f(u)){if(n.slice(i).search(u)){var f=r;for(u.global||(u=ti(u.source,zu(dn.exec(u))+"g")),u.lastIndex=0;o=u.exec(f);)var c=o.index;r=r.slice(0,c===F?i:c)}}else n.indexOf(jr(u),i)!=i&&(u=r.lastIndexOf(u),-1e.__dir__?"Right":"")}),e},Mn.prototype[n+"Right"]=function(t){ -return this.reverse()[n](t).reverse()}}),u(["filter","map","takeWhile"],function(n,t){var r=t+1,e=1==r||3==r;Mn.prototype[n]=function(n){var t=this.clone();return t.__iteratees__.push({iteratee:je(n,3),type:r}),t.__filtered__=t.__filtered__||e,t}}),u(["head","last"],function(n,t){var r="take"+(t?"Right":"");Mn.prototype[n]=function(){return this[r](1).value()[0]}}),u(["initial","tail"],function(n,t){var r="drop"+(t?"":"Right");Mn.prototype[n]=function(){return this.__filtered__?new Mn(this):this[r](1); -}}),Mn.prototype.compact=function(){return this.filter(Nu)},Mn.prototype.find=function(n){return this.filter(n).head()},Mn.prototype.findLast=function(n){return this.reverse().find(n)},Mn.prototype.invokeMap=lr(function(n,t){return typeof n=="function"?new Mn(this):this.map(function(r){return Dt(r,n,t)})}),Mn.prototype.reject=function(n){return this.filter(su(je(n)))},Mn.prototype.slice=function(n,t){n=Ou(n);var r=this;return r.__filtered__&&(0t)?new Mn(r):(0>n?r=r.takeRight(-n):n&&(r=r.drop(n)), -t!==F&&(t=Ou(t),r=0>t?r.dropRight(-t):r.take(t-n)),r)},Mn.prototype.takeRightWhile=function(n){return this.reverse().takeWhile(n).reverse()},Mn.prototype.toArray=function(){return this.take(4294967295)},Et(Mn.prototype,function(n,t){var r=/^(?:filter|find|map|reject)|While$/.test(t),e=/^(?:head|last)$/.test(t),u=On[e?"take"+("last"==t?"Right":""):t],i=e||/^find/.test(t);u&&(On.prototype[t]=function(){function t(n){return 
n=u.apply(On,s([n],f)),e&&h?n[0]:n}var o=this.__wrapped__,f=e?[1]:arguments,c=o instanceof Mn,a=f[0],l=c||af(o); -l&&r&&typeof a=="function"&&1!=a.length&&(c=l=false);var h=this.__chain__,p=!!this.__actions__.length,a=i&&!h,c=c&&!p;return!i&&l?(o=c?o:new Mn(this),o=n.apply(o,f),o.__actions__.push({func:nu,args:[t],thisArg:F}),new zn(o,h)):a&&c?n.apply(this,f):(o=this.thru(t),a?e?o.value()[0]:o.value():o)})}),u("pop push shift sort splice unshift".split(" "),function(n){var t=ui[n],r=/^(?:push|sort|unshift)$/.test(n)?"tap":"thru",e=/^(?:pop|shift)$/.test(n);On.prototype[n]=function(){var n=arguments;if(e&&!this.__chain__){ -var u=this.value();return t.apply(af(u)?u:[],n)}return this[r](function(r){return t.apply(af(r)?r:[],n)})}}),Et(Mn.prototype,function(n,t){var r=On[t];if(r){var e=r.name+"";(Ji[e]||(Ji[e]=[])).push({name:t,func:r})}}),Ji[Xr(F,2).name]=[{name:"wrapper",func:F}],Mn.prototype.clone=function(){var n=new Mn(this.__wrapped__);return n.__actions__=Mr(this.__actions__),n.__dir__=this.__dir__,n.__filtered__=this.__filtered__,n.__iteratees__=Mr(this.__iteratees__),n.__takeCount__=this.__takeCount__,n.__views__=Mr(this.__views__), -n},Mn.prototype.reverse=function(){if(this.__filtered__){var n=new Mn(this);n.__dir__=-1,n.__filtered__=true}else n=this.clone(),n.__dir__*=-1;return n},Mn.prototype.value=function(){var n,t=this.__wrapped__.value(),r=this.__dir__,e=af(t),u=0>r,i=e?t.length:0;n=i;for(var o=this.__views__,f=0,c=-1,a=o.length;++c=this.__values__.length;return{done:n,value:n?F:this.__values__[this.__index__++]}},On.prototype.plant=function(n){for(var t,r=this;r instanceof Sn;){var e=Pe(r);e.__index__=0,e.__values__=F,t?u.__wrapped__=e:t=e;var u=e,r=r.__wrapped__}return u.__wrapped__=n,t},On.prototype.reverse=function(){var n=this.__wrapped__;return n instanceof Mn?(this.__actions__.length&&(n=new Mn(this)),n=n.reverse(),n.__actions__.push({func:nu,args:[Je],thisArg:F}),new zn(n,this.__chain__)):this.thru(Je); -},On.prototype.toJSON=On.prototype.valueOf=On.prototype.value=function(){return kr(this.__wrapped__,this.__actions__)},On.prototype.first=On.prototype.head,Ai&&(On.prototype[Ai]=tu),On}();typeof define=="function"&&typeof define.amd=="object"&&define.amd?(Zn._=it, define(function(){return it})):Vn?((Vn.exports=it)._=it,qn._=it):Zn._=it}).call(this);!function(t,n){"object"==typeof exports&&"undefined"!=typeof module?n(exports):"function"==typeof define&&define.amd?define(["exports"],n):n(t.d3=t.d3||{})}(this,function(t){"use strict";function n(t){return function(n,e){return Mf(t(n),e)}}function e(t,n){return[t,n]}function r(t,n,e){var r=(n-t)/Math.max(0,e),i=Math.floor(Math.log(r)/Math.LN10),o=r/Math.pow(10,i);return i>=0?(o>=If?10:o>=Hf?5:o>=Bf?2:1)*Math.pow(10,i):-Math.pow(10,-i)/(o>=If?10:o>=Hf?5:o>=Bf?2:1)}function i(t,n,e){var r=Math.abs(n-t)/Math.max(0,e),i=Math.pow(10,Math.floor(Math.log(r)/Math.LN10)),o=r/i;return o>=If?i*=10:o>=Hf?i*=5:o>=Bf&&(i*=2),n=0&&(e=t.slice(r+1),t=t.slice(0,r)),t&&!n.hasOwnProperty(t))throw new Error("unknown type: "+t);return{type:t,name:e}})}function m(t,n){for(var e,r=0,i=t.length;r=0&&(n=t.slice(e+1),t=t.slice(0,e)),{type:t,name:n}})}function A(t){return function(){var n=this.__on;if(n){for(var e,r=0,i=-1,o=n.length;rn?1:t>=n?0:NaN}function U(t){return function(){this.removeAttribute(t)}}function O(t){return function(){this.removeAttributeNS(t.space,t.local)}}function F(t,n){return function(){this.setAttribute(t,n)}}function Y(t,n){return function(){this.setAttributeNS(t.space,t.local,n)}}function I(t,n){return function(){var 
e=n.apply(this,arguments);null==e?this.removeAttribute(t):this.setAttribute(t,e)}}function H(t,n){return function(){var e=n.apply(this,arguments);null==e?this.removeAttributeNS(t.space,t.local):this.setAttributeNS(t.space,t.local,e)}}function B(t){return function(){this.style.removeProperty(t)}}function j(t,n,e){return function(){this.style.setProperty(t,n,e)}}function X(t,n,e){return function(){var r=n.apply(this,arguments);null==r?this.style.removeProperty(t):this.style.setProperty(t,r,e)}}function W(t,n){return t.style.getPropertyValue(n)||Gl(t).getComputedStyle(t,null).getPropertyValue(n)}function V(t){return function(){delete this[t]}}function $(t,n){return function(){this[t]=n}}function Z(t,n){return function(){var e=n.apply(this,arguments);null==e?delete this[t]:this[t]=e}}function G(t){return t.trim().split(/^|\s+/)}function Q(t){return t.classList||new J(t)}function J(t){this._node=t,this._names=G(t.getAttribute("class")||"")}function K(t,n){for(var e=Q(t),r=-1,i=n.length;++r>8&15|n>>4&240,n>>4&15|240&n,(15&n)<<4|15&n,1)):(n=Mh.exec(t))?Et(parseInt(n[1],16)):(n=Th.exec(t))?new Rt(n[1],n[2],n[3],1):(n=Nh.exec(t))?new Rt(255*n[1]/100,255*n[2]/100,255*n[3]/100,1):(n=kh.exec(t))?Ct(n[1],n[2],n[3],n[4]):(n=Sh.exec(t))?Ct(255*n[1]/100,255*n[2]/100,255*n[3]/100,n[4]):(n=Ah.exec(t))?Lt(n[1],n[2]/100,n[3]/100,1):(n=Eh.exec(t))?Lt(n[1],n[2]/100,n[3]/100,n[4]):Ch.hasOwnProperty(t)?Et(Ch[t]):"transparent"===t?new Rt(NaN,NaN,NaN,0):null}function Et(t){return new Rt(t>>16&255,t>>8&255,255&t,1)}function Ct(t,n,e,r){return r<=0&&(t=n=e=NaN),new Rt(t,n,e,r)}function zt(t){return t instanceof St||(t=At(t)),t?(t=t.rgb(),new Rt(t.r,t.g,t.b,t.opacity)):new Rt}function Pt(t,n,e,r){return 1===arguments.length?zt(t):new Rt(t,n,e,null==r?1:r)}function Rt(t,n,e,r){this.r=+t,this.g=+n,this.b=+e,this.opacity=+r}function Lt(t,n,e,r){return r<=0?t=n=e=NaN:e<=0||e>=1?t=n=NaN:n<=0&&(t=NaN),new Ut(t,n,e,r)}function Dt(t){if(t instanceof Ut)return new Ut(t.h,t.s,t.l,t.opacity);if(t instanceof St||(t=At(t)),!t)return new Ut;if(t instanceof Ut)return t;t=t.rgb();var n=t.r/255,e=t.g/255,r=t.b/255,i=Math.min(n,e,r),o=Math.max(n,e,r),u=NaN,a=o-i,c=(o+i)/2;return a?(u=n===o?(e-r)/a+6*(e0&&c<1?0:u,new Ut(u,a,c,t.opacity)}function qt(t,n,e,r){return 1===arguments.length?Dt(t):new Ut(t,n,e,null==r?1:r)}function Ut(t,n,e,r){this.h=+t,this.s=+n,this.l=+e,this.opacity=+r}function Ot(t,n,e){return 255*(t<60?n+(e-n)*t/60:t<180?e:t<240?n+(e-n)*(240-t)/60:n)}function Ft(t){if(t instanceof It)return new It(t.l,t.a,t.b,t.opacity);if(t instanceof $t){var n=t.h*zh;return new It(t.l,Math.cos(n)*t.c,Math.sin(n)*t.c,t.opacity)}t instanceof Rt||(t=zt(t));var e=Xt(t.r),r=Xt(t.g),i=Xt(t.b),o=Ht((.4124564*e+.3575761*r+.1804375*i)/Rh),u=Ht((.2126729*e+.7151522*r+.072175*i)/Lh);return new It(116*u-16,500*(o-u),200*(u-Ht((.0193339*e+.119192*r+.9503041*i)/Dh)),t.opacity)}function Yt(t,n,e,r){return 1===arguments.length?Ft(t):new It(t,n,e,null==r?1:r)}function It(t,n,e,r){this.l=+t,this.a=+n,this.b=+e,this.opacity=+r}function Ht(t){return t>Fh?Math.pow(t,1/3):t/Oh+qh}function Bt(t){return t>Uh?t*t*t:Oh*(t-qh)}function jt(t){return 255*(t<=.0031308?12.92*t:1.055*Math.pow(t,1/2.4)-.055)}function Xt(t){return(t/=255)<=.04045?t/12.92:Math.pow((t+.055)/1.055,2.4)}function Wt(t){if(t instanceof $t)return new $t(t.h,t.c,t.l,t.opacity);t instanceof It||(t=Ft(t));var n=Math.atan2(t.b,t.a)*Ph;return new $t(n<0?n+360:n,Math.sqrt(t.a*t.a+t.b*t.b),t.l,t.opacity)}function Vt(t,n,e,r){return 1===arguments.length?Wt(t):new $t(t,n,e,null==r?1:r)}function 
/* d3.v4 minified library bundle (includes the graph-scroll scrolling plugin); minified build output, fragment only */
null!=r.precision||isNaN(o=Ag(c,s))||(r.precision=o),t.formatPrefix(r,s);case"":case"e":case"g":case"p":case"r":null!=r.precision||isNaN(o=Eg(c,Math.max(Math.abs(u),Math.abs(a))))||(r.precision=o-("e"===r.type));break;case"f":case"%":null!=r.precision||isNaN(o=Sg(c))||(r.precision=o-2*("%"===r.type))}return t.format(r)},Ax=function(t,n){t=t.slice();var e,r=0,i=t.length-1,o=t[r],u=t[i];return u0?t>1?aa(function(n){n.setTime(Math.floor(n/t)*t)},function(n,e){n.setTime(+n+e*t)},function(n,e){return(e-n)/t}):zx:null};var Px=zx.range,Rx=6e4,Lx=6048e5,Dx=aa(function(t){t.setTime(1e3*Math.floor(t/1e3))},function(t,n){t.setTime(+t+1e3*n)},function(t,n){return(n-t)/1e3},function(t){return t.getUTCSeconds()}),qx=Dx.range,Ux=aa(function(t){t.setTime(Math.floor(t/Rx)*Rx)},function(t,n){t.setTime(+t+n*Rx)},function(t,n){return(n-t)/Rx},function(t){return t.getMinutes()}),Ox=Ux.range,Fx=aa(function(t){var n=t.getTimezoneOffset()*Rx%36e5;n<0&&(n+=36e5),t.setTime(36e5*Math.floor((+t-n)/36e5)+n)},function(t,n){t.setTime(+t+36e5*n)},function(t,n){return(n-t)/36e5},function(t){return t.getHours()}),Yx=Fx.range,Ix=aa(function(t){t.setHours(0,0,0,0)},function(t,n){t.setDate(t.getDate()+n)},function(t,n){return(n-t-(n.getTimezoneOffset()-t.getTimezoneOffset())*Rx)/864e5},function(t){return t.getDate()-1}),Hx=Ix.range,Bx=ca(0),jx=ca(1),Xx=ca(2),Wx=ca(3),Vx=ca(4),$x=ca(5),Zx=ca(6),Gx=Bx.range,Qx=jx.range,Jx=Xx.range,Kx=Wx.range,tb=Vx.range,nb=$x.range,eb=Zx.range,rb=aa(function(t){t.setDate(1),t.setHours(0,0,0,0)},function(t,n){t.setMonth(t.getMonth()+n)},function(t,n){return n.getMonth()-t.getMonth()+12*(n.getFullYear()-t.getFullYear())},function(t){return t.getMonth()}),ib=rb.range,ob=aa(function(t){t.setMonth(0,1),t.setHours(0,0,0,0)},function(t,n){t.setFullYear(t.getFullYear()+n)},function(t,n){return n.getFullYear()-t.getFullYear()},function(t){return t.getFullYear()});ob.every=function(t){return isFinite(t=Math.floor(t))&&t>0?aa(function(n){n.setFullYear(Math.floor(n.getFullYear()/t)*t),n.setMonth(0,1),n.setHours(0,0,0,0)},function(n,e){n.setFullYear(n.getFullYear()+e*t)}):null};var ub=ob.range,ab=aa(function(t){t.setUTCSeconds(0,0)},function(t,n){t.setTime(+t+n*Rx)},function(t,n){return(n-t)/Rx},function(t){return t.getUTCMinutes()}),cb=ab.range,sb=aa(function(t){t.setUTCMinutes(0,0,0)},function(t,n){t.setTime(+t+36e5*n)},function(t,n){return(n-t)/36e5},function(t){return t.getUTCHours()}),fb=sb.range,lb=aa(function(t){t.setUTCHours(0,0,0,0)},function(t,n){t.setUTCDate(t.getUTCDate()+n)},function(t,n){return(n-t)/864e5},function(t){return t.getUTCDate()-1}),hb=lb.range,pb=sa(0),db=sa(1),vb=sa(2),gb=sa(3),yb=sa(4),_b=sa(5),mb=sa(6),xb=pb.range,bb=db.range,wb=vb.range,Mb=gb.range,Tb=yb.range,Nb=_b.range,kb=mb.range,Sb=aa(function(t){t.setUTCDate(1),t.setUTCHours(0,0,0,0)},function(t,n){t.setUTCMonth(t.getUTCMonth()+n)},function(t,n){return n.getUTCMonth()-t.getUTCMonth()+12*(n.getUTCFullYear()-t.getUTCFullYear())},function(t){return t.getUTCMonth()}),Ab=Sb.range,Eb=aa(function(t){t.setUTCMonth(0,1),t.setUTCHours(0,0,0,0)},function(t,n){t.setUTCFullYear(t.getUTCFullYear()+n)},function(t,n){return n.getUTCFullYear()-t.getUTCFullYear()},function(t){return t.getUTCFullYear()});Eb.every=function(t){return isFinite(t=Math.floor(t))&&t>0?aa(function(n){n.setUTCFullYear(Math.floor(n.getUTCFullYear()/t)*t),n.setUTCMonth(0,1),n.setUTCHours(0,0,0,0)},function(n,e){n.setUTCFullYear(n.getUTCFullYear()+e*t)}):null};var Cb,zb=Eb.range,Pb={"-":"",_:" ",0:"0"},Rb=/^\s*\d+/,Lb=/^%/,Db=/[\\^$*+?|[\]().{}]/g;xc({dateTime:"%x, 
%X",date:"%-m/%-d/%Y",time:"%-I:%M:%S %p",periods:["AM","PM"],days:["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"],shortDays:["Sun","Mon","Tue","Wed","Thu","Fri","Sat"],months:["January","February","March","April","May","June","July","August","September","October","November","December"],shortMonths:["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]});var qb=Date.prototype.toISOString?bc:t.utcFormat("%Y-%m-%dT%H:%M:%S.%LZ"),Ub=+new Date("2000-01-01T00:00:00.000Z")?wc:t.utcParse("%Y-%m-%dT%H:%M:%S.%LZ"),Ob=1e3,Fb=60*Ob,Yb=60*Fb,Ib=24*Yb,Hb=7*Ib,Bb=30*Ib,jb=365*Ib,Xb=function(){return Nc(ob,rb,Bx,Ix,Fx,Ux,Dx,zx,t.timeFormat).domain([new Date(2e3,0,1),new Date(2e3,0,2)])},Wb=function(){return Nc(Eb,Sb,pb,lb,sb,ab,Dx,zx,t.utcFormat).domain([Date.UTC(2e3,0,1),Date.UTC(2e3,0,2)])},Vb=function(t){return t.match(/.{6}/g).map(function(t){return"#"+t})},$b=Vb("1f77b4ff7f0e2ca02cd627289467bd8c564be377c27f7f7fbcbd2217becf"),Zb=Vb("393b795254a36b6ecf9c9ede6379398ca252b5cf6bcedb9c8c6d31bd9e39e7ba52e7cb94843c39ad494ad6616be7969c7b4173a55194ce6dbdde9ed6"),Gb=Vb("3182bd6baed69ecae1c6dbefe6550dfd8d3cfdae6bfdd0a231a35474c476a1d99bc7e9c0756bb19e9ac8bcbddcdadaeb636363969696bdbdbdd9d9d9"),Qb=Vb("1f77b4aec7e8ff7f0effbb782ca02c98df8ad62728ff98969467bdc5b0d58c564bc49c94e377c2f7b6d27f7f7fc7c7c7bcbd22dbdb8d17becf9edae5"),Jb=Sp(Gt(300,.5,0),Gt(-240,.5,1)),Kb=Sp(Gt(-100,.75,.35),Gt(80,1.5,.8)),tw=Sp(Gt(260,.75,.35),Gt(80,1.5,.8)),nw=Gt(),ew=function(t){(t<0||t>1)&&(t-=Math.floor(t));var n=Math.abs(t-.5);return nw.h=360*t-100,nw.s=1.5-1.5*n,nw.l=.8-.9*n,nw+""},rw=kc(Vb("44015444025645045745055946075a46085c460a5d460b5e470d60470e6147106347116447136548146748166848176948186a481a6c481b6d481c6e481d6f481f70482071482173482374482475482576482677482878482979472a7a472c7a472d7b472e7c472f7d46307e46327e46337f463480453581453781453882443983443a83443b84433d84433e85423f854240864241864142874144874045884046883f47883f48893e49893e4a893e4c8a3d4d8a3d4e8a3c4f8a3c508b3b518b3b528b3a538b3a548c39558c39568c38588c38598c375a8c375b8d365c8d365d8d355e8d355f8d34608d34618d33628d33638d32648e32658e31668e31678e31688e30698e306a8e2f6b8e2f6c8e2e6d8e2e6e8e2e6f8e2d708e2d718e2c718e2c728e2c738e2b748e2b758e2a768e2a778e2a788e29798e297a8e297b8e287c8e287d8e277e8e277f8e27808e26818e26828e26828e25838e25848e25858e24868e24878e23888e23898e238a8d228b8d228c8d228d8d218e8d218f8d21908d21918c20928c20928c20938c1f948c1f958b1f968b1f978b1f988b1f998a1f9a8a1e9b8a1e9c891e9d891f9e891f9f881fa0881fa1881fa1871fa28720a38620a48621a58521a68522a78522a88423a98324aa8325ab8225ac8226ad8127ad8128ae8029af7f2ab07f2cb17e2db27d2eb37c2fb47c31b57b32b67a34b67935b77937b87838b9773aba763bbb753dbc743fbc7340bd7242be7144bf7046c06f48c16e4ac16d4cc26c4ec36b50c46a52c56954c56856c66758c7655ac8645cc8635ec96260ca6063cb5f65cb5e67cc5c69cd5b6ccd5a6ece5870cf5773d05675d05477d1537ad1517cd2507fd34e81d34d84d44b86d54989d5488bd6468ed64590d74393d74195d84098d83e9bd93c9dd93ba0da39a2da37a5db36a8db34aadc32addc30b0dd2fb2dd2db5de2bb8de29bade28bddf26c0df25c2df23c5e021c8e020cae11fcde11dd0e11cd2e21bd5e21ad8e219dae319dde318dfe318e2e418e5e419e7e419eae51aece51befe51cf1e51df4e61ef6e620f8e621fbe723fde725")),iw=kc(Vb("00000401000501010601010802010902020b02020d03030f03031204041405041606051806051a07061c08071e0907200a08220b09240c09260d0a290e0b2b100b2d110c2f120d31130d34140e36150e38160f3b180f3d19103f1a10421c10441d11471e114920114b21114e22115024125325125527125829115a2a115c2c115f2d11612f116331116533106734106936106b38106c390f6e3b0f703d0f713f0f72400f74420f75440f764510774710784910784a10794c117a4e117b4f127b51127c5
2137c54137d56147d57157e59157e5a167e5c167f5d177f5f187f601880621980641a80651a80671b80681c816a1c816b1d816d1d816e1e81701f81721f817320817521817621817822817922827b23827c23827e24828025828125818326818426818627818827818928818b29818c29818e2a81902a81912b81932b80942c80962c80982d80992d809b2e7f9c2e7f9e2f7fa02f7fa1307ea3307ea5317ea6317da8327daa337dab337cad347cae347bb0357bb2357bb3367ab5367ab73779b83779ba3878bc3978bd3977bf3a77c03a76c23b75c43c75c53c74c73d73c83e73ca3e72cc3f71cd4071cf4070d0416fd2426fd3436ed5446dd6456cd8456cd9466bdb476adc4869de4968df4a68e04c67e24d66e34e65e44f64e55064e75263e85362e95462ea5661eb5760ec5860ed5a5fee5b5eef5d5ef05f5ef1605df2625df2645cf3655cf4675cf4695cf56b5cf66c5cf66e5cf7705cf7725cf8745cf8765cf9785df9795df97b5dfa7d5efa7f5efa815ffb835ffb8560fb8761fc8961fc8a62fc8c63fc8e64fc9065fd9266fd9467fd9668fd9869fd9a6afd9b6bfe9d6cfe9f6dfea16efea36ffea571fea772fea973feaa74feac76feae77feb078feb27afeb47bfeb67cfeb77efeb97ffebb81febd82febf84fec185fec287fec488fec68afec88cfeca8dfecc8ffecd90fecf92fed194fed395fed597fed799fed89afdda9cfddc9efddea0fde0a1fde2a3fde3a5fde5a7fde7a9fde9aafdebacfcecaefceeb0fcf0b2fcf2b4fcf4b6fcf6b8fcf7b9fcf9bbfcfbbdfcfdbf")),ow=kc(Vb("00000401000501010601010802010a02020c02020e03021004031204031405041706041907051b08051d09061f0a07220b07240c08260d08290e092b10092d110a30120a32140b34150b37160b39180c3c190c3e1b0c411c0c431e0c451f0c48210c4a230c4c240c4f260c51280b53290b552b0b572d0b592f0a5b310a5c320a5e340a5f3609613809623909633b09643d09653e0966400a67420a68440a68450a69470b6a490b6a4a0c6b4c0c6b4d0d6c4f0d6c510e6c520e6d540f6d550f6d57106e59106e5a116e5c126e5d126e5f136e61136e62146e64156e65156e67166e69166e6a176e6c186e6d186e6f196e71196e721a6e741a6e751b6e771c6d781c6d7a1d6d7c1d6d7d1e6d7f1e6c801f6c82206c84206b85216b87216b88226a8a226a8c23698d23698f24699025689225689326679526679727669827669a28659b29649d29649f2a63a02a63a22b62a32c61a52c60a62d60a82e5fa92e5eab2f5ead305dae305cb0315bb1325ab3325ab43359b63458b73557b93556ba3655bc3754bd3853bf3952c03a51c13a50c33b4fc43c4ec63d4dc73e4cc83f4bca404acb4149cc4248ce4347cf4446d04545d24644d34743d44842d54a41d74b3fd84c3ed94d3dda4e3cdb503bdd513ade5238df5337e05536e15635e25734e35933e45a31e55c30e65d2fe75e2ee8602de9612bea632aeb6429eb6628ec6726ed6925ee6a24ef6c23ef6e21f06f20f1711ff1731df2741cf3761bf37819f47918f57b17f57d15f67e14f68013f78212f78410f8850ff8870ef8890cf98b0bf98c0af98e09fa9008fa9207fa9407fb9606fb9706fb9906fb9b06fb9d07fc9f07fca108fca309fca50afca60cfca80dfcaa0ffcac11fcae12fcb014fcb216fcb418fbb61afbb81dfbba1ffbbc21fbbe23fac026fac228fac42afac62df9c72ff9c932f9cb35f8cd37f8cf3af7d13df7d340f6d543f6d746f5d949f5db4cf4dd4ff4df53f4e156f3e35af3e55df2e661f2e865f2ea69f1ec6df1ed71f1ef75f1f179f2f27df2f482f3f586f3f68af4f88ef5f992f6fa96f8fb9af9fc9dfafda1fcffa4")),uw=kc(Vb("0d088710078813078916078a19068c1b068d1d068e20068f2206902406912605912805922a05932c05942e05952f059631059733059735049837049938049a3a049a3c049b3e049c3f049c41049d43039e44039e46039f48039f4903a04b03a14c02a14e02a25002a25102a35302a35502a45601a45801a45901a55b01a55c01a65e01a66001a66100a76300a76400a76600a76700a86900a86a00a86c00a86e00a86f00a87100a87201a87401a87501a87701a87801a87a02a87b02a87d03a87e03a88004a88104a78305a78405a78606a68707a68808a68a09a58b0aa58d0ba58e0ca48f0da4910ea3920fa39410a29511a19613a19814a099159f9a169f9c179e9d189d9e199da01a9ca11b9ba21d9aa31e9aa51f99a62098a72197a82296aa2395ab2494ac2694ad2793ae2892b02991b12a90b22b8fb32c8eb42e8db52f8cb6308bb7318ab83289ba3388bb3488bc3587bd3786be3885bf3984c03a83c13b82c23c81c33d80c43e7fc5407ec6417dc7427cc8437bc9447aca457acb4679cc4778cc4977cd4a76ce4b75cf4c74d04d73d14e72d24f71d35171d45270d5536fd5546ed6556d
d7566cd8576bd9586ada5a6ada5b69db5c68dc5d67dd5e66de5f65de6164df6263e06363e16462e26561e26660e3685fe4695ee56a5de56b5de66c5ce76e5be76f5ae87059e97158e97257ea7457eb7556eb7655ec7754ed7953ed7a52ee7b51ef7c51ef7e50f07f4ff0804ef1814df1834cf2844bf3854bf3874af48849f48948f58b47f58c46f68d45f68f44f79044f79143f79342f89441f89540f9973ff9983ef99a3efa9b3dfa9c3cfa9e3bfb9f3afba139fba238fca338fca537fca636fca835fca934fdab33fdac33fdae32fdaf31fdb130fdb22ffdb42ffdb52efeb72dfeb82cfeba2cfebb2bfebd2afebe2afec029fdc229fdc328fdc527fdc627fdc827fdca26fdcb26fccd25fcce25fcd025fcd225fbd324fbd524fbd724fad824fada24f9dc24f9dd25f8df25f8e125f7e225f7e425f6e626f6e826f5e926f5eb27f4ed27f3ee27f3f027f2f227f1f426f1f525f0f724f0f921")),aw=function(t){return function(){return t}},cw=Math.abs,sw=Math.atan2,fw=Math.cos,lw=Math.max,hw=Math.min,pw=Math.sin,dw=Math.sqrt,vw=1e-12,gw=Math.PI,yw=gw/2,_w=2*gw,mw=function(){function t(){var t,s,f=+n.apply(this,arguments),l=+e.apply(this,arguments),h=o.apply(this,arguments)-yw,p=u.apply(this,arguments)-yw,d=cw(p-h),v=p>h;if(c||(c=t=Oe()),lvw)if(d>_w-vw)c.moveTo(l*fw(h),l*pw(h)),c.arc(0,0,l,h,p,!v),f>vw&&(c.moveTo(f*fw(p),f*pw(p)),c.arc(0,0,f,p,h,v));else{var g,y,_=h,m=p,x=h,b=p,w=d,M=d,T=a.apply(this,arguments)/2,N=T>vw&&(i?+i.apply(this,arguments):dw(f*f+l*l)),k=hw(cw(l-f)/2,+r.apply(this,arguments)),S=k,A=k;if(N>vw){var E=Ec(N/f*pw(T)),C=Ec(N/l*pw(T));(w-=2*E)>vw?(E*=v?1:-1,x+=E,b-=E):(w=0,x=b=(h+p)/2),(M-=2*C)>vw?(C*=v?1:-1,_+=C,m-=C):(M=0,_=m=(h+p)/2)}var z=l*fw(_),P=l*pw(_),R=f*fw(b),L=f*pw(b);if(k>vw){var D=l*fw(m),q=l*pw(m),U=f*fw(x),O=f*pw(x);if(dvw?Dc(z,P,U,O,D,q,R,L):[R,L],Y=z-F[0],I=P-F[1],H=D-F[0],B=q-F[1],j=1/pw(Ac((Y*H+I*B)/(dw(Y*Y+I*I)*dw(H*H+B*B)))/2),X=dw(F[0]*F[0]+F[1]*F[1]);S=hw(k,(f-X)/(j-1)),A=hw(k,(l-X)/(j+1))}}M>vw?A>vw?(g=qc(U,O,z,P,l,A,v),y=qc(D,q,R,L,l,A,v),c.moveTo(g.cx+g.x01,g.cy+g.y01),Avw&&w>vw?S>vw?(g=qc(R,L,D,q,f,-S,v),y=qc(z,P,U,O,f,-S,v),c.lineTo(g.cx+g.x01,g.cy+g.y01),S=f;--l)s.point(g[l],y[l]);s.lineEnd(),s.areaEnd()}v&&(g[n]=+e(h,n,t),y[n]=+i(h,n,t),s.point(r?+r(h,n,t):g[n],o?+o(h,n,t):y[n]))}if(p)return s=null,p+""||null}function n(){return bw().defined(u).curve(c).context(a)}var e=Oc,r=null,i=aw(0),o=Fc,u=aw(!0),a=null,c=xw,s=null;return t.x=function(n){return arguments.length?(e="function"==typeof n?n:aw(+n),r=null,t):e},t.x0=function(n){return arguments.length?(e="function"==typeof n?n:aw(+n),t):e},t.x1=function(n){return arguments.length?(r=null==n?null:"function"==typeof n?n:aw(+n),t):r},t.y=function(n){return arguments.length?(i="function"==typeof n?n:aw(+n),o=null,t):i},t.y0=function(n){return arguments.length?(i="function"==typeof n?n:aw(+n),t):i},t.y1=function(n){return arguments.length?(o=null==n?null:"function"==typeof n?n:aw(+n),t):o},t.lineX0=t.lineY0=function(){return n().x(e).y(i)},t.lineY1=function(){return n().x(e).y(o)},t.lineX1=function(){return n().x(r).y(i)},t.defined=function(n){return arguments.length?(u="function"==typeof n?n:aw(!!n),t):u},t.curve=function(n){return arguments.length?(c=n,null!=a&&(s=c(a)),t):c},t.context=function(n){return arguments.length?(null==n?a=s=null:s=c(a=n),t):a},t},Mw=function(t,n){return nt?1:n>=t?0:NaN},Tw=function(t){return t},Nw=function(){function t(t){var a,c,s,f,l,h=t.length,p=0,d=new Array(h),v=new Array(h),g=+i.apply(this,arguments),y=Math.min(_w,Math.max(-_w,o.apply(this,arguments)-g)),_=Math.min(Math.abs(y)/h,u.apply(this,arguments)),m=_*(y<0?-1:1);for(a=0;a0&&(p+=l);for(null!=e?d.sort(function(t,n){return e(v[t],v[n])}):null!=r&&d.sort(function(n,e){return 
r(t[n],t[e])}),a=0,s=p?(y-h*m)/p:0;a0?l*s:0)+m,v[c]={data:t[c],index:a,value:l,startAngle:g,endAngle:f,padAngle:_};return v}var n=Tw,e=Mw,r=null,i=aw(0),o=aw(_w),u=aw(0);return t.value=function(e){return arguments.length?(n="function"==typeof e?e:aw(+e),t):n},t.sortValues=function(n){return arguments.length?(e=n,r=null,t):e}, -t.sort=function(n){return arguments.length?(r=n,e=null,t):r},t.startAngle=function(n){return arguments.length?(i="function"==typeof n?n:aw(+n),t):i},t.endAngle=function(n){return arguments.length?(o="function"==typeof n?n:aw(+n),t):o},t.padAngle=function(n){return arguments.length?(u="function"==typeof n?n:aw(+n),t):u},t},kw=Ic(xw);Yc.prototype={areaStart:function(){this._curve.areaStart()},areaEnd:function(){this._curve.areaEnd()},lineStart:function(){this._curve.lineStart()},lineEnd:function(){this._curve.lineEnd()},point:function(t,n){this._curve.point(n*Math.sin(t),n*-Math.cos(t))}};var Sw=function(){return Hc(bw().curve(kw))},Aw=function(){var t=ww().curve(kw),n=t.curve,e=t.lineX0,r=t.lineX1,i=t.lineY0,o=t.lineY1;return t.angle=t.x,delete t.x,t.startAngle=t.x0,delete t.x0,t.endAngle=t.x1,delete t.x1,t.radius=t.y,delete t.y,t.innerRadius=t.y0,delete t.y0,t.outerRadius=t.y1,delete t.y1,t.lineStartAngle=function(){return Hc(e())},delete t.lineX0,t.lineEndAngle=function(){return Hc(r())},delete t.lineX1,t.lineInnerRadius=function(){return Hc(i())},delete t.lineY0,t.lineOuterRadius=function(){return Hc(o())},delete t.lineY1,t.curve=function(t){return arguments.length?n(Ic(t)):n()._curve},t},Ew=function(t,n){return[(n=+n)*Math.cos(t-=Math.PI/2),n*Math.sin(t)]},Cw=Array.prototype.slice,zw={draw:function(t,n){var e=Math.sqrt(n/gw);t.moveTo(e,0),t.arc(0,0,e,0,_w)}},Pw={draw:function(t,n){var e=Math.sqrt(n/5)/2;t.moveTo(-3*e,-e),t.lineTo(-e,-e),t.lineTo(-e,-3*e),t.lineTo(e,-3*e),t.lineTo(e,-e),t.lineTo(3*e,-e),t.lineTo(3*e,e),t.lineTo(e,e),t.lineTo(e,3*e),t.lineTo(-e,3*e),t.lineTo(-e,e),t.lineTo(-3*e,e),t.closePath()}},Rw=Math.sqrt(1/3),Lw=2*Rw,Dw={draw:function(t,n){var e=Math.sqrt(n/Lw),r=e*Rw;t.moveTo(0,-e),t.lineTo(r,0),t.lineTo(0,e),t.lineTo(-r,0),t.closePath()}},qw=Math.sin(gw/10)/Math.sin(7*gw/10),Uw=Math.sin(_w/10)*qw,Ow=-Math.cos(_w/10)*qw,Fw={draw:function(t,n){var e=Math.sqrt(.8908130915292852*n),r=Uw*e,i=Ow*e;t.moveTo(0,-e),t.lineTo(r,i);for(var o=1;o<5;++o){var u=_w*o/5,a=Math.cos(u),c=Math.sin(u);t.lineTo(c*e,-a*e),t.lineTo(a*r-c*i,c*r+a*i)}t.closePath()}},Yw={draw:function(t,n){var e=Math.sqrt(n),r=-e/2;t.rect(r,r,e,e)}},Iw=Math.sqrt(3),Hw={draw:function(t,n){var e=-Math.sqrt(n/(3*Iw));t.moveTo(0,2*e),t.lineTo(-Iw*e,-e),t.lineTo(Iw*e,-e),t.closePath()}},Bw=-.5,jw=Math.sqrt(3)/2,Xw=1/Math.sqrt(12),Ww=3*(Xw/2+1),Vw={draw:function(t,n){var e=Math.sqrt(n/Ww),r=e/2,i=e*Xw,o=r,u=e*Xw+e,a=-o,c=u;t.moveTo(r,i),t.lineTo(o,u),t.lineTo(a,c),t.lineTo(Bw*r-jw*i,jw*r+Bw*i),t.lineTo(Bw*o-jw*u,jw*o+Bw*u),t.lineTo(Bw*a-jw*c,jw*a+Bw*c),t.lineTo(Bw*r+jw*i,Bw*i-jw*r),t.lineTo(Bw*o+jw*u,Bw*u-jw*o),t.lineTo(Bw*a+jw*c,Bw*c-jw*a),t.closePath()}},$w=[zw,Pw,Dw,Yw,Fw,Hw,Vw],Zw=function(){function t(){var t;if(r||(r=t=Oe()),n.apply(this,arguments).draw(r,+e.apply(this,arguments)),t)return r=null,t+""||null}var n=aw(zw),e=aw(64),r=null;return t.type=function(e){return arguments.length?(n="function"==typeof e?e:aw(e),t):n},t.size=function(n){return arguments.length?(e="function"==typeof n?n:aw(+n),t):e},t.context=function(n){return 
arguments.length?(r=null==n?null:n,t):r},t},Gw=function(){};Kc.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._y0=this._y1=NaN,this._point=0},lineEnd:function(){switch(this._point){case 3:Jc(this,this._x1,this._y1);case 2:this._context.lineTo(this._x1,this._y1)}(this._line||0!==this._line&&1===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2;break;case 2:this._point=3,this._context.lineTo((5*this._x0+this._x1)/6,(5*this._y0+this._y1)/6);default:Jc(this,t,n)}this._x0=this._x1,this._x1=t,this._y0=this._y1,this._y1=n}};var Qw=function(t){return new Kc(t)};ts.prototype={areaStart:Gw,areaEnd:Gw,lineStart:function(){this._x0=this._x1=this._x2=this._x3=this._x4=this._y0=this._y1=this._y2=this._y3=this._y4=NaN,this._point=0},lineEnd:function(){switch(this._point){case 1:this._context.moveTo(this._x2,this._y2),this._context.closePath();break;case 2:this._context.moveTo((this._x2+2*this._x3)/3,(this._y2+2*this._y3)/3),this._context.lineTo((this._x3+2*this._x2)/3,(this._y3+2*this._y2)/3),this._context.closePath();break;case 3:this.point(this._x2,this._y2),this.point(this._x3,this._y3),this.point(this._x4,this._y4)}},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._x2=t,this._y2=n;break;case 1:this._point=2,this._x3=t,this._y3=n;break;case 2:this._point=3,this._x4=t,this._y4=n,this._context.moveTo((this._x0+4*this._x1+t)/6,(this._y0+4*this._y1+n)/6);break;default:Jc(this,t,n)}this._x0=this._x1,this._x1=t,this._y0=this._y1,this._y1=n}};var Jw=function(t){return new ts(t)};ns.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._y0=this._y1=NaN,this._point=0},lineEnd:function(){(this._line||0!==this._line&&3===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1;break;case 1:this._point=2;break;case 2:this._point=3;var e=(this._x0+4*this._x1+t)/6,r=(this._y0+4*this._y1+n)/6;this._line?this._context.lineTo(e,r):this._context.moveTo(e,r);break;case 3:this._point=4;default:Jc(this,t,n)}this._x0=this._x1,this._x1=t,this._y0=this._y1,this._y1=n}};var Kw=function(t){return new ns(t)};es.prototype={lineStart:function(){this._x=[],this._y=[],this._basis.lineStart()},lineEnd:function(){var t=this._x,n=this._y,e=t.length-1;if(e>0)for(var r,i=t[0],o=n[0],u=t[e]-i,a=n[e]-o,c=-1;++c<=e;)r=c/e,this._basis.point(this._beta*t[c]+(1-this._beta)*(i+r*u),this._beta*n[c]+(1-this._beta)*(o+r*a));this._x=this._y=null,this._basis.lineEnd()},point:function(t,n){this._x.push(+t),this._y.push(+n)}};var tM=function t(n){function e(t){return 1===n?new Kc(t):new es(t,n)}return e.beta=function(n){return t(+n)},e}(.85);is.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._x2=this._y0=this._y1=this._y2=NaN,this._point=0},lineEnd:function(){switch(this._point){case 2:this._context.lineTo(this._x2,this._y2);break;case 3:rs(this,this._x1,this._y1)}(this._line||0!==this._line&&1===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2,this._x1=t,this._y1=n;break;case 
2:this._point=3;default:rs(this,t,n)}this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var nM=function t(n){function e(t){return new is(t,n)}return e.tension=function(n){return t(+n)},e}(0);os.prototype={areaStart:Gw,areaEnd:Gw,lineStart:function(){this._x0=this._x1=this._x2=this._x3=this._x4=this._x5=this._y0=this._y1=this._y2=this._y3=this._y4=this._y5=NaN,this._point=0},lineEnd:function(){switch(this._point){case 1:this._context.moveTo(this._x3,this._y3),this._context.closePath();break;case 2:this._context.lineTo(this._x3,this._y3),this._context.closePath();break;case 3:this.point(this._x3,this._y3),this.point(this._x4,this._y4),this.point(this._x5,this._y5)}},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._x3=t,this._y3=n;break;case 1:this._point=2,this._context.moveTo(this._x4=t,this._y4=n);break;case 2:this._point=3,this._x5=t,this._y5=n;break;default:rs(this,t,n)}this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var eM=function t(n){function e(t){return new os(t,n)}return e.tension=function(n){return t(+n)},e}(0);us.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._x2=this._y0=this._y1=this._y2=NaN,this._point=0},lineEnd:function(){(this._line||0!==this._line&&3===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1;break;case 1:this._point=2;break;case 2:this._point=3,this._line?this._context.lineTo(this._x2,this._y2):this._context.moveTo(this._x2,this._y2);break;case 3:this._point=4;default:rs(this,t,n)}this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var rM=function t(n){function e(t){return new us(t,n)}return e.tension=function(n){return t(+n)},e}(0);cs.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._x2=this._y0=this._y1=this._y2=NaN,this._l01_a=this._l12_a=this._l23_a=this._l01_2a=this._l12_2a=this._l23_2a=this._point=0},lineEnd:function(){switch(this._point){case 2:this._context.lineTo(this._x2,this._y2);break;case 3:this.point(this._x2,this._y2)}(this._line||0!==this._line&&1===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){if(t=+t,n=+n,this._point){var e=this._x2-t,r=this._y2-n;this._l23_a=Math.sqrt(this._l23_2a=Math.pow(e*e+r*r,this._alpha))}switch(this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2;break;case 2:this._point=3;default:as(this,t,n)}this._l01_a=this._l12_a,this._l12_a=this._l23_a,this._l01_2a=this._l12_2a,this._l12_2a=this._l23_2a,this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var iM=function t(n){function e(t){return n?new cs(t,n):new is(t,0)}return e.alpha=function(n){return t(+n)},e}(.5);ss.prototype={areaStart:Gw,areaEnd:Gw,lineStart:function(){this._x0=this._x1=this._x2=this._x3=this._x4=this._x5=this._y0=this._y1=this._y2=this._y3=this._y4=this._y5=NaN,this._l01_a=this._l12_a=this._l23_a=this._l01_2a=this._l12_2a=this._l23_2a=this._point=0},lineEnd:function(){switch(this._point){case 1:this._context.moveTo(this._x3,this._y3),this._context.closePath();break;case 2:this._context.lineTo(this._x3,this._y3),this._context.closePath();break;case 
3:this.point(this._x3,this._y3),this.point(this._x4,this._y4),this.point(this._x5,this._y5)}},point:function(t,n){if(t=+t,n=+n,this._point){var e=this._x2-t,r=this._y2-n;this._l23_a=Math.sqrt(this._l23_2a=Math.pow(e*e+r*r,this._alpha))}switch(this._point){case 0:this._point=1,this._x3=t,this._y3=n;break;case 1:this._point=2,this._context.moveTo(this._x4=t,this._y4=n);break;case 2:this._point=3,this._x5=t,this._y5=n;break;default:as(this,t,n)}this._l01_a=this._l12_a,this._l12_a=this._l23_a,this._l01_2a=this._l12_2a,this._l12_2a=this._l23_2a,this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var oM=function t(n){function e(t){return n?new ss(t,n):new os(t,0)}return e.alpha=function(n){return t(+n)},e}(.5);fs.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._x2=this._y0=this._y1=this._y2=NaN,this._l01_a=this._l12_a=this._l23_a=this._l01_2a=this._l12_2a=this._l23_2a=this._point=0},lineEnd:function(){(this._line||0!==this._line&&3===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){if(t=+t,n=+n,this._point){var e=this._x2-t,r=this._y2-n;this._l23_a=Math.sqrt(this._l23_2a=Math.pow(e*e+r*r,this._alpha))}switch(this._point){case 0:this._point=1;break;case 1:this._point=2;break;case 2:this._point=3,this._line?this._context.lineTo(this._x2,this._y2):this._context.moveTo(this._x2,this._y2);break;case 3:this._point=4;default:as(this,t,n)}this._l01_a=this._l12_a,this._l12_a=this._l23_a,this._l01_2a=this._l12_2a,this._l12_2a=this._l23_2a,this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var uM=function t(n){function e(t){return n?new fs(t,n):new us(t,0)}return e.alpha=function(n){return t(+n)},e}(.5);ls.prototype={areaStart:Gw,areaEnd:Gw,lineStart:function(){this._point=0},lineEnd:function(){this._point&&this._context.closePath()},point:function(t,n){t=+t,n=+n,this._point?this._context.lineTo(t,n):(this._point=1,this._context.moveTo(t,n))}};var aM=function(t){return new ls(t)};gs.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._y0=this._y1=this._t0=NaN,this._point=0},lineEnd:function(){switch(this._point){case 2:this._context.lineTo(this._x1,this._y1);break;case 3:vs(this,this._t0,ds(this,this._t0))}(this._line||0!==this._line&&1===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){var e=NaN;if(t=+t,n=+n,t!==this._x1||n!==this._y1){switch(this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2;break;case 2:this._point=3,vs(this,ds(this,e=ps(this,t,n)),e);break;default:vs(this,this._t0,e=ps(this,t,n))}this._x0=this._x1,this._x1=t,this._y0=this._y1,this._y1=n,this._t0=e}}},(ys.prototype=Object.create(gs.prototype)).point=function(t,n){gs.prototype.point.call(this,n,t)},_s.prototype={moveTo:function(t,n){this._context.moveTo(n,t)},closePath:function(){this._context.closePath()},lineTo:function(t,n){this._context.lineTo(n,t)},bezierCurveTo:function(t,n,e,r,i,o){this._context.bezierCurveTo(n,t,r,e,o,i)}},bs.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x=[],this._y=[]},lineEnd:function(){var t=this._x,n=this._y,e=t.length;if(e)if(this._line?this._context.lineTo(t[0],n[0]):this._context.moveTo(t[0],n[0]),2===e)this._context.lineTo(t[1],n[1]);else for(var 
r=ws(t),i=ws(n),o=0,u=1;u=0&&(this._t=1-this._t,this._line=1-this._line)},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2;default:if(this._t<=0)this._context.lineTo(this._x,n),this._context.lineTo(t,n);else{var e=this._x*(1-this._t)+t*this._t;this._context.lineTo(e,this._y),this._context.lineTo(e,n)}}this._x=t,this._y=n}};var sM=function(t){return new Ms(t,.5)},fM=function(t,n){if((i=t.length)>1)for(var e,r,i,o=1,u=t[n[0]],a=u.length;o=0;)e[n]=n;return e},hM=function(){function t(t){var o,u,a=n.apply(this,arguments),c=t.length,s=a.length,f=new Array(s);for(o=0;o0){for(var e,r,i,o=0,u=t[0].length;o1)for(var e,r,i,o,u,a,c=0,s=t[n[0]].length;c=0?(r[0]=o,r[1]=o+=i):i<0?(r[1]=u,r[0]=u+=i):r[0]=o},vM=function(t,n){if((e=t.length)>0){for(var e,r=0,i=t[n[0]],o=i.length;r0&&(r=(e=t[n[0]]).length)>0){for(var e,r,i,o=0,u=1;u=a)return null;var c=t-i.site[0],s=n-i.site[1],f=c*c+s*s;do{i=o.cells[r=u],u=null,i.halfedges.forEach(function(e){var r=o.edges[e],a=r.left;if(a!==i.site&&a||(a=r.right)){var c=t-a[0],s=n-a[1],l=c*c+s*s;lz}i.zoom("mouse",m(r(i.that.__zoom,i.mouse[0]=Al(i.that),i.mouse[1]),i.extent,M))}function e(){o.on("mousemove.zoom mouseup.zoom",null),xt(t.event.view,i.moved),LM(),i.end()}if(!v&&y.apply(this,arguments)){var i=u(this,arguments),o=fh(t.event.view).on("mousemove.zoom",n,!0).on("mouseup.zoom",e,!0),a=Al(this),c=t.event.clientX,s=t.event.clientY;vh(t.event.view),ff(),i.mouse=[a,this.__zoom.invert(a)],Gp(this),i.start()}}function f(){if(y.apply(this,arguments)){var i=this.__zoom,u=Al(this),a=i.invert(u),c=i.k*(t.event.shiftKey?.5:2),s=m(r(e(i,c),u,a),_.apply(this,arguments),M);LM(),T>0?fh(this).transition().duration(T).call(o,s,u):fh(this).call(n.transform,s)}}function l(){if(y.apply(this,arguments)){var n,e,r,i,o=u(this,arguments),a=t.event.changedTouches,c=a.length;for(ff(),e=0;e-1)&&(t.push(this.parentNode),!0)}).select(function(){return this.parentNode})},IM=function(t){var n,e=El(t),r=UM(t);t=_l(r.tag),n=this.select(function(){return e.apply(this,arguments)||this.appendChild(t.apply(this,arguments))});for(var i in r.attr)n.attr(i,r.attr[i]);return n},HM=function(t,n){return this.selectAll("tspan").data(function(n){return("function"==typeof t?t(n):t).map(function(t){return{line:t,parent:n}})}).enter().append("tspan").text(function(t){return t.line}).attr("x",0).attr("dy",function(t,e){return e?("function"==typeof n?n(t.parent,t.line,e):n)||15:0})},BM=function(t,n){if("string"==typeof n){console.warn("DEPRECATED: jetpack's appendMany order of arguments has changed. 
It's appendMany('div', data) from now on");var e=n;n=t,t=e}return this.selectAll(null).data(n).enter().append(t)},jM=function(t,n){if("object"==typeof t){for(var e in t)this.attr(e.replace(/([a-z\d])([A-Z])/g,"$1-$2").toLowerCase(),t[e]);return this}return 1==arguments.length?this.attr(t):this.attr(t,n)};_f.not=function(t){return!t},_f.run=function(t){return t()},_f.objToFn=function(t,n){return 1==arguments.length&&(n=void 0),function(e){return void 0!==t[e]?t[e]:n}};var XM=function(t,n){function e(t,n,e){return n=n.replace(/([a-z\d])([A-Z])/g,"$1-$2").toLowerCase(),~"top left bottom right padding-top padding-left padding-bottom padding-right border-top b-width border-left-width border-botto-width m border-right-width margin-top margin-left margin-bottom margin-right font-size width height stroke-width line-height margin padding border border-radius max-width min-width".indexOf(n)?t.style(n,"function"==typeof e?i(e):r(e)):t.style(n,e),t}function r(t){return t.match?t:t+"px"}function i(t){return function(){return r(t.apply(this,arguments))}}if("object"==typeof t){for(var o in t)e(this,o,t[o]);return this}return 1==arguments.length?this.style(t):e(this,t,n)},WM={A:7,a:7,B:8,b:7,C:8,c:6,D:9,d:7,E:7,e:7,F:7,f:4,G:9,g:7,H:9,h:7,I:3,i:3,J:5,j:3,K:8,k:6,L:7,l:3,M:11,m:11,N:9,n:7,O:9,o:7,P:8,p:7,Q:9,q:7,R:8,r:4,S:8,s:6,T:7,t:4,U:9,u:7,V:7,v:6,W:11,w:9,X:7,x:6,Y:7,y:6,Z:7,z:5,".":2,",":2,":":2,";":2},VM=function(t,n,e,r){function i(t){return!r&&WM[t]||WM.a}function o(t){return t.length}function u(t,n){return t-n}var a,c,s,f,l,h,p=[],d=[],v=[];return c=t.split(" "),c.forEach(function(t,n){var e=t.split("-");e.length>1?e.forEach(function(t,n){d.push(t+(nl&&a>h&&(p.push(v.join("")),v.length=0,a=0),a+=n,v.push(t)}),v.length&&p.push(v.join("")),p.filter(function(t){return""!==t})},$M=function(t){return"function"==typeof t?function(n,e){return t(n)t(e)?1:t(n)>=t(e)?0:NaN}:function(n,e){return n[t]e[t]?1:n[t]>=e[t]?0:NaN}},ZM=function(t){return"function"==typeof t?function(n,e){return t(e)t(n)?1:t(e)>=t(n)?0:NaN}:function(n,e){return e[t]n[t]?1:e[t]>=n[t]?0:NaN}},GM=function(t){t=t||{},t.margin=t.margin||{},["top","right","bottom","left"].forEach(function(n){t.margin[n]||0===t.margin[n]||(t.margin[n]=20)}),t.parentSel&&(t.sel=t.parentSel);var n=t.sel&&t.sel.node();return t.totalWidth=t.totalWidth||n&&n.offsetWidth||960,t.totalHeight=t.totalHeight||n&&n.offsetHeight||500,t.width=t.width||t.totalWidth-t.margin.left-t.margin.right,t.height=t.height||t.totalHeight-t.margin.top-t.margin.bottom, -t.totalWidth=t.width+t.margin.left+t.margin.right,t.totalHeight=t.height+t.margin.top+t.margin.bottom,t.sel=t.sel||fh("body"),t.sel.st({position:"relative",height:t.totalHeight,width:t.totalWidth}),t.x=t.x||Wu().range([0,t.width]),t.y=t.y||Wu().range([t.height,0]),t.xAxis=t.xAxis||d().scale(t.x),t.yAxis=t.yAxis||v().scale(t.y),t.layers=(t.layers||"s").split("").map(function(n){var e;if("s"==n)e=t.sel.append("svg").st({position:t.layers?"absolute":""}).attr("width",t.totalWidth).attr("height",t.totalHeight).append("g").attr("transform","translate("+t.margin.left+","+t.margin.top+")"),t.svg||(t.svg=e);else if("c"==n){var r=window.devicePixelRatio||1;e=t.sel.append("canvas").at({width:t.totalWidth*r,height:t.totalHeight*r}).st({width:t.totalWidth,height:t.totalHeight}).st({position:"absolute"}).node().getContext("2d"),e.scale(r,r),e.translate(t.margin.left,t.margin.top)}else"d"==n&&(e=t.sel.append("div").st({position:"absolute",left:t.margin.left,top:t.margin.top,width:t.width,height:t.height}));return 
e}),t},QM=function(t){return{xAxisSel:t.svg.append("g").attr("class","x axis").attr("transform","translate(0,"+t.height+")").call(t.xAxis),yAxisSel:t.svg.append("g").attr("class","y axis").call(t.yAxis)}},JM=function(t,n,e){return Math.max(t,Math.min(e,n))},KM=function(n,e,r){function i(t){e.classed("tooltip-hidden",!1).html("").appendMany("div",r).html(function(n){return n(t)}),fh(this).classed("tooltipped",!0)}function o(n){if(e.size()){var r=t.event,i=r.clientX,o=r.clientY,u=e.node().getBoundingClientRect(),a=JM(20,i-u.width/2,window.innerWidth-u.width-20),c=innerHeight>o+20+u.height?o+20:o-u.height-20;e.style("left",a+"px").style("top",c+"px")}}function u(t){e.classed("tooltip-hidden",!0),lh(".tooltipped").classed("tooltipped",!1)}if(n.size()){e=e||fh(".tooltip"),n.on("mouseover.attachTooltip",i).on("mousemove.attachTooltip",o).on("mouseout.attachTooltip",u).on("click.attachTooltip",function(t){console.log(t)});var a=n.datum();r=r||wv(a).filter(function(t){return"object"!=typeof a[t]&&"array"!=a[t]}).map(function(t){return function(n){return t+": "+n[t]+""}})}},tT=function(){var t=Cu(),n=[].slice.call(arguments),e=n.slice(0,n.length-1),r=n[n.length-1];e.forEach(function(n){var e=n.split("?")[0].split(".").reverse()[0],i={csv:_x,tsv:mx,json:dx}[e];if(!i)return r(new Error("Invalid type",n));t.defer(i,n)}),t.awaitAll(r)},nT=function(t,n){return xv().key(n).entries(t).map(function(t){return t.values.key=t.key,t.values})},eT=function(t,n){return n?Math.round(t*(n=Math.pow(10,n)))/n:Math.round(t)},rT=function(t,n){for(var e,r,i,o,u,a,c=bf(n),s=-1,f=t.length-bf(t),l=t[f-1];++s { - var r = rawdigits.data[labelIndex*28*28 + j*28 + i + 0] - var g = rawdigits.data[labelIndex*28*28 + j*28 + i + 0] - var b = rawdigits.data[labelIndex*28*28 + j*28 + i + 0] - - ctx.beginPath() - ctx.fillStyle = `rgb(${r},${g},${b})` - ctx.rect(i*s + offsetX, j*s + offsetY, s, s) - ctx.fill() - }) - } - - function decorateDigitMetadata(digitMetadata){ - digitMetadata.forEach(d => { - delete d[''] - d.i = +d.i - d.label = +d.y - d.priv_order = +d.priv_order - }) - - var byLabel = d3.nestBy(digitMetadata, d => d.y) - byLabel = _.sortBy(byLabel, d => d.key) - byLabel.forEach(digit => { - digit.forEach((d, i) => d.labelIndex = i) - }) - - return {digitMetadata, byLabel} - } - - var colors = [d3.interpolateTurbo(.15), d3.interpolateTurbo(.85)] - var epsilonExtent = [400000, .01] - // var epsilonExtent = [65, .01] - - - var addAxisLabel = (c, xText, yText, xOffset=40, yOffset=-40) => { - c.svg.select('.x').append('g') - .translate([c.width/2, xOffset]) - .append('text.axis-label') - .text(xText) - .at({textAnchor: 'middle'}) - .st({fill: '#000', fontSize: 14}) - - c.svg.select('.y') - .append('g') - .translate([yOffset, c.height/2]) - .append('text.axis-label') - .text(yText) - .at({textAnchor: 'middle', transform: 'rotate(-90)'}) - .st({fill: '#000', fontSize: 14}) - } - - var ggPlotBg = (c, isBlack=true) => { - if (!isBlack){ - c.svg.append('rect') - .at({width: c.width, height: c.height, fill: '#eee'}) - .lower() - } - - c.svg.selectAll('.tick').selectAll('line').remove() - c.svg.selectAll('.y .tick') - .append('path').at({d: 'M 0 0 H ' + c.width, stroke: '#fff', strokeWidth: 1}) - c.svg.selectAll('.y text').at({x: -3}) - c.svg.selectAll('.x .tick') - .append('path').at({d: 'M 0 0 V -' + c.height, stroke: '#fff', strokeWidth: 1}) - } - - - return {data, getFile, drawDigit, colors, epsilonExtent, addAxisLabel, ggPlotBg, decorateDigitMetadata} -})() - - - - - - -// mnist_train.csv -// mnist_train_raw.npy -// 
umap_train_0.npy -// umap_train_1.npy -// umap_train_2.npy -// umap_train_3.npy -// umap_train_4.npy -// umap_train_5.npy -// umap_train_6.npy -// umap_train_7.npy -// umap_train_8.npy -// umap_train_9.npy -// umap_train_all.npy diff --git a/spaces/merve/fill-in-the-blank/public/anonymization/annotations.js b/spaces/merve/fill-in-the-blank/public/anonymization/annotations.js deleted file mode 100644 index ed45db46369d1bb2a709b20bd97c29451d4284c0..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/anonymization/annotations.js +++ /dev/null @@ -1,38 +0,0 @@ -var annotations = - -[ -] - - - - -function addSwoop(c){ - var swoopy = d3.swoopyDrag() - .x(d => c.x(d.x)) - .y(d => c.y(d.y)) - .draggable(0) - .annotations(annotations) - - var swoopySel = c.svg.append('g.annotations').call(swoopy) - - c.svg.append('marker#arrow') - .attr('viewBox', '-10 -10 20 20') - .attr('markerWidth', 20) - .attr('markerHeight', 20) - .attr('orient', 'auto') - .append('path').at({d: 'M-6.75,-6.75 L 0,0 L -6.75,6.75'}) - - - swoopySel.selectAll('path').attr('marker-end', 'url(#arrow)') - window.annotationSel = swoopySel.selectAll('g') - .st({fontSize: 12, opacity: d => d.slide == 0 ? 1 : 0}) - - swoopySel.selectAll('text') - .each(function(d){ - d3.select(this) - .text('') //clear existing text - .tspans(d3.wordwrap(d.text, d.width || 20), 12) //wrap after 20 char - }) -} - - diff --git a/spaces/merve/gr-blocks/README.md b/spaces/merve/gr-blocks/README.md deleted file mode 100644 index 7d3c2210844ea61575c0ab1e60582d3c3cc5691e..0000000000000000000000000000000000000000 --- a/spaces/merve/gr-blocks/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Gr Blocks -emoji: 📊 -colorFrom: indigo -colorTo: gray -sdk: gradio -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js b/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js deleted file mode 100644 index 45e4fafb63a667109fdf81c03ed1d375027ae462..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/server-side/fill-in-the-blank/scatter-plot-colab/spearman-distribution/init.js +++ /dev/null @@ -1,168 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - -// console.clear() - -window.init = function(){ - var initFns = [window.initUtil, window.initScatter, window.initPair] - if (!initFns.every(d => d)) return - - window.util = initUtil() - - window.tidy = d3.csvParse(python_data.tidyCSV, d => { - return { - e0: +d.e0, - e1: +d.e1, - i0: +d.i0, - i1: +d.i1, - tokenIndex: +d.tokenIndex, - sentenceIndex: +d.sentenceIndex, - } - }) - - var bySentence = d3.nestBy(tidy, d => d.sentenceIndex) - bySentence.forEach(sent => { - sent.sentenceIndex = +sent.key - sent.s0 = python_data.sentences[sent.sentenceIndex].s0 - sent.s1 = python_data.sentences[sent.sentenceIndex].s1 - sent.orig = python_data.sentences[sent.sentenceIndex].orig - - sent.corrA = ss.sampleCorrelation(sent.map(d => d.i0), sent.map(d => d.i1)) - // sent.corrA = ss.sampleCorrelation(sent.map(d => d.e0), sent.map(d => d.e1)) - }) - - var sel = d3.select('.container').html(` -
-    <div class='left'>
-      <div class='beeswarm'></div>
-      <div class='list'></div>
-    </div>
-    <div class='right'>
-      <div class='pair'></div>
-    </div>
- `) - .st({width: 1100}) - d3.selectAll('.left,.right').st({width: 500, display: 'inline-block', verticalAlign: 'top'}) - - function initBeeswarm(bySentence, sel){ - var c = d3.conventions({ - sel: sel.append('div'), - height: 80, - totalWidth: 400, - margin: {left: 0, top: 18} - }) - - c.x.domain(d3.extent(bySentence.map(d => +d.corrA))).nice() - // c.x.domain([0, 1]) - c.xAxis.ticks(5) - d3.drawAxis(c) - util.ggPlotBg(c) - c.svg.select('.y').remove() - c.svg.selectAll('.tick').st({display: 'block'}) - - var simulation = d3.forceSimulation(bySentence) - .force("x", d3.forceX(d => c.x(d.corrA)).strength(1)) - .force("y", d3.forceY(c.height / 2)) - .force("collide", d3.forceCollide(4)) - .stop() - - for (var i = 0; i < 120; ++i) simulation.tick() - - c.svg.append('text').text('text') - .text('Distribution of Spearman Correlation Coefficients') - .at({dy: -5, fontWeight: 600}) - - c.svg.appendMany('circle.sentence', bySentence) - .translate(d => [d.x, d.y]) - .at({ - r: 3, - fill: 'none', - stroke: '#000' - }) - .on('mouseover', setSentenceAsPair) - } - initBeeswarm(bySentence, d3.select('.beeswarm')) - - - function initList(bySentence, sel){ - // var sentenceSel = sel.st({height: 500, overflowY: 'scroll', cursor: 'default'}) - // .appendMany('div.sentence', _.sortBy(bySentence, d => d.corrA)) - // .on('mouseover', setSentenceAsPair) - // .st({padding: 2, fontSize: 12}) - - // sentenceSel.append('span') - // .text(d => (d3.format('+.2f')(d.corrA)).replace('0.', '.')) - // .st({marginRight: 10, color: '#aaa'}) - - // sentenceSel.append('span') - // .text(d => d.orig.replace('[', '').replace(']', '')) - - var tableSel = sel - .st({height: 470 + 17, overflowY: 'scroll', cursor: 'default', position: 'relative', left: -40}) - .append('table') - .st({fontSize: 12}) - - tableSel.append('tr.header') - .html(` - corr - template - `) - - var rowSel = tableSel - .appendMany('tr.sentence', _.sortBy(bySentence, d => d.corrA)) - .on('mouseover', setSentenceAsPair) - .st({padding: 2, fontSize: 12}) - .html(d => ` - ${(d3.format('+.2f')(d.corrA)).replace('0.', '.')} - ${d.orig.replace('[', '').replace(']', '')} - `) - } - initList(bySentence, d3.select('.list')) - - - - function setSentenceAsPair(s){ - s.e0 = d3.range(python_data.vocab.length).map(d => -Infinity) - s.e1 = d3.range(python_data.vocab.length).map(d => -Infinity) - s.forEach(d => { - s.e0[d.tokenIndex] = d.e0 - s.e1[d.tokenIndex] = d.e1 - }) - - s.label0 = s.s0 - s.label1 = s.s1 - s.vocab = python_data.vocab - s.count = python_settings.count || 150 - s.isDifference = python_settings.isDifference - - var sel = d3.select('.pair').html('').st({width: 400}) - - initPair(s, sel) - - d3.selectAll('.sentence').classed('active', d => d == s) - - d3.selectAll('div.sentence').filter(d => d == s) - .each(function(){ - this.scrollIntoView({ block: 'nearest', inline: 'nearest'}) - }) - } - - setSentenceAsPair(bySentence[0]) - -} - - -window.init() - diff --git a/spaces/merve/uncertainty-calibration/public/dataset-worldviews/shapes.js b/spaces/merve/uncertainty-calibration/public/dataset-worldviews/shapes.js deleted file mode 100644 index 87af55b4829a78b48dc41f6674c12cd58cfc3741..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/dataset-worldviews/shapes.js +++ /dev/null @@ -1,248 +0,0 @@ - -// Space out the shapes a bit -shapeParams.forEach((d) => (d.startX = d.startX * 1.1)); - -// How to draw the background boxes, which will be styled later -const classifierBgPathTop = "M 420 150 H 0 V 0 H 420 V 150"; -const 
classifierBgPathBottom = "M 420 300 H 0 V 0 H 420 V 300"; - -const toDropdownValueStringDict = { - shape_name: "circles, triangles, or rectangles", - pointiness: "pointy shapes or round shapes", - size: "small shapes or big shapes", -}; - -const toShortValueStringDict = { - shape_name: "circles, triangles, or rectangles", - pointiness: "pointy or round", - size: "small or big", -}; - -const toDropdownValueRoundingStringDict = { - true: "with our best guess", - false: 'as "other"', -}; - -const toPropertyStringDict = { - pointy: "pointy shapes", - round: "round shapes", - small: "small shapes", - large: "big shapes", - circle: "circles", - triangle: "triangles", - rect: "rectangles", -}; - -function toOriginalString(inputString) { - for (const [key, value] of Object.entries(toPropertyStringDict)) { - if (inputString == value) { - return key; - } - } -} - -function toPropertyString(inputProperty, isRounding = true) { - if (!isRounding && inputProperty.startsWith("rt_")) { - return "others"; - } - return toPropertyStringDict[inputProperty.replace("rt_", "")]; -} - -// Dictionary mapping div name to classifier results and summary sentences -var allResults = {}; -var summaries = {}; - -function toBool(inputString) { - if (inputString == "true") { - return true; - } - return false; -} -function updateResults() { - allResults["default-classifier"] = calculateResults(); - allResults["second-classifier"] = calculateResults( - "shape_name", - toBool( - document.getElementById("second-classifier-select-rounding").value - ) - ); - - allResults["final-classifier"] = calculateResults( - document.getElementById("final-classifier-select-category").value, - toBool( - document.getElementById("final-classifier-select-rounding").value - ) - ); - - allResults["conclusion"] = calculateResults( - document.getElementById("conclusion-select-category").value, - true - ); - - updateSummaries(); - updateSecondInterfaceImages(); -} - -// Text summaries are written by hand for simplicity, and keyed simply by -// a string of the form "[category]:[useGuess]" (or simply "none"). -// These are hashed in the same way as the results, by div name. -function updateSummaries() { - summaries["default-classifier"] = getPerformanceSummary("none"); - summaries["second-classifier"] = getPerformanceSummary( - "shape_name:" + - document.getElementById("second-classifier-select-rounding").value - ); - - summaries["final-classifier"] = getPerformanceSummary( - document.getElementById("final-classifier-select-category").value + - ":" + - document.getElementById("final-classifier-select-rounding").value - ); - - summaries["conclusion"] = getPerformanceSummary( - document.getElementById("conclusion-select-category").value + ":" + true - ); -} - -// Yes, these background colors are hardcoded in, -// no, this is not good design, this is just how it happened. 
-function getPerformanceSummary(key) { - allSummaries = { - "shape_name:true": - 'well on circles, terribly on triangles, and best on rectangles', - "shape_name:false": - 'poorly on circles, best on triangles and rectangles, and fine on other shapes', - "pointiness:true": - 'better on pointy shapes and worse on round shapes', - "pointiness:false": - 'best on pointy shapes, fine on round shapes, and poorly on other shapes', - "size:true": - 'better on small shapes, worse on big shapes', - "size:false": - 'poorly on small shapes, terribly on big shapes, and best on other shapes', - "none:true": - 'fine on all shapes', - "none:false": - 'fine on all shapes', - none: 'fine on all shapes', - }; - - return "The Is-Shaded Classifier performs " + allSummaries[key] + "."; -} - -// On the second-classifier dropdown, update the "task interface" image. -function updateSecondInterfaceImages() { - d3.select(".second-interface").html(function () { - if ( - !document.getElementById("second-classifier-select-rounding").value - ) { - return; - } - var imgPath = - "img/interface_shape_name_" + - document.getElementById("second-classifier-select-rounding").value; - return ( - '' - ); - }); -} - -// Calculate results given input parameters -function calculateResults(property = "none", useGuess = false) { - switch (property) { - case "none": - var nAccurate = shapeParams.filter( - (shape) => shape.correctness == "correct" - ).length; - var totalShapes = shapeParams.length; - - var results = [ - { - object: "shape", - n: totalShapes, - "n correct": nAccurate, - accuracy: (nAccurate / totalShapes).toFixed(3), - rawCategoryName: "none", - }, - ]; - - return results; - case "pointiness": - categories = ["pointy", "round"]; - break; - case "size": - categories = ["small", "large"]; - break; - case "shape_name": - categories = ["circle", "triangle", "rect"]; - break; - } - - var results = []; - if (useGuess == true) { - // Rounding shapes to categories - - for (const category of categories) { - // Get shapes that are either in this category (e.g. rectangle) or "rounds to" this category (e.g. 
rt_rectangle) - var theseShapes = shapeParams.filter( - (shape) => - shape[property] == category || - shape[property] == "rt_" + category - ); - var nAccurate = theseShapes.filter( - (shape) => shape.correctness == "correct" - ).length; - var totalShapes = theseShapes.length; - - results.push({ - object: toPropertyString(category), - n: totalShapes, - "n correct": nAccurate, - accuracy: (nAccurate / totalShapes).toFixed(3), - rawCategoryName: category, - }); - } - } else { - // Not rounding, treat everything else as "other" - - // First go through existing categories - for (const category of categories) { - var theseShapes = shapeParams.filter( - (shape) => shape[property] == category - ); - var nAccurate = theseShapes.filter( - (shape) => shape.correctness == "correct" - ).length; - var totalShapes = theseShapes.length; - results.push({ - object: toPropertyString(category), - n: totalShapes, - "n correct": nAccurate, - accuracy: (nAccurate / totalShapes).toFixed(3), - rawCategoryName: category, - }); - } - - // Now get "other" shapes - var theseShapes = shapeParams.filter( - (shape) => !categories.includes(shape[property]) - ); - var nAccurate = theseShapes.filter( - (shape) => shape.correctness == "correct" - ).length; - var totalShapes = theseShapes.length; - results.push({ - object: "other shapes", - n: totalShapes, - "n correct": nAccurate, - accuracy: (nAccurate / totalShapes).toFixed(3), - rawCategoryName: "other", - }); - } - - return results; -} diff --git a/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/index.html b/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/index.html deleted file mode 100644 index 4bf1bcdddc46b7f3aac6e75626f6d44cf6dd2b7e..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/public/uncertainty-calibration/index.html +++ /dev/null @@ -1,225 +0,0 @@ - - - - - - - - - - - - - - - - - - Are Model Predictions Probabilities? - - - - - - - - - - - - - - - -
- -
- -

Are Model Predictions Probabilities?

- -
-
-
- -
- -If a machine learning model tells you that it’s going to rain tomorrow with a score of 0.60, should you buy an umbrella?1 - -

In the diagram, we have a hypothetical machine learning classifier for predicting rainy days. For each date, the classifier reads in relevant signals like temperature and humidity and spits out a number between 0 and 1. Each data point is a different day: its position shows the model’s prediction for rain that day, and its symbol (🌧️ or ☀️) shows the weather that actually occurred. - -

Do the model’s predictions tell us the probability of rain?
- -

In general, machine learning classifiers don’t just give binary predictions, but instead provide some numerical value between 0 and 1 for their predictions. This number, sometimes called the model score or confidence, is a way for the model to express its certainty about which class the input data belongs to. In most applications, the exact score is ignored and we use a threshold to round the score to a binary answer, yes or no, rain or not. However, by using calibration we can transform these scores into probabilities and use them more effectively in decision making. - -

- -

Thresholding

- -

One traditional approach to using a model’s score is through thresholding. In this setting, you choose a threshold t and then declare that the model thinks it’s going to rain if the score is above t and that it isn’t if the score is below t, thereby converting the score to a binary outcome. When you observe the actual weather, you can see how often the model was wrong and compute key aggregate statistics like accuracy. - -

We can sometimes treat these aggregate statistics themselves as probabilities. For example, accuracy is the probability that the binary prediction of your model (rain or not) is equal to the ground truth (🌧️ or ☀️). -
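For concreteness, here is a minimal sketch of thresholding in Python; the scores and rain labels below are made up purely for illustration:

```
import numpy as np

# Hypothetical model scores and the weather that actually occurred (1 = rain).
scores = np.array([0.10, 0.35, 0.40, 0.70, 0.80, 0.95])
rained = np.array([0, 1, 0, 0, 1, 1])

t = 0.5                                      # the chosen threshold
predict_rain = scores > t                    # binary prediction: rain or not
accuracy = (predict_rain == rained).mean()   # fraction of days predicted correctly
print(f"accuracy at t={t}: {accuracy:.2f}")
```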

- -

Adjustable Thresholding

- -

The threshold can easily be changed after the model is trained. - -

Thresholding uses the model’s score to make a decision, but fails to consider the model’s confidence. The model score is only used to decide whether you are above or below the threshold, but the magnitude of the difference isn’t considered. For example, if you threshold at 0.4, the model’s predictions of 0.6 and 0.9 are treated the same, even though the model is much more confident in the latter. - -

Can we do a better job of incorporating the model score into our understanding of the model?
- -
- -

Calibration

- -

Calibration lets us compare our model scores directly to probabilities. - -

For this technique, instead of one threshold, we have many, which we use to split the predictions into buckets. Again, once we observe the ground truth, we can see what proportion of the predictions in each bucket were rainy days (🌧️). This proportion is the empirical probability of rain for that bucket. - -

Ideally, we want this proportion to be higher for higher buckets, so that the probability is roughly in line with the average prediction for that bucket. We call the difference between this proportion and the bucket’s average prediction the calibration error, and by averaging over all of the buckets, we can calculate the Expected Calibration Error. If the proportions and the predictions line up for our use case, meaning the error is low, then we say the model is “well-calibrated” and we can consider treating the model score as the probability that it will actually rain. -
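There are several variants of Expected Calibration Error; a common one uses equal-width buckets and weights each bucket by the fraction of predictions it holds. A sketch of that variant (assuming `scores` and `rained` are NumPy arrays of predictions and 0/1 outcomes):

```
import numpy as np

def expected_calibration_error(scores, rained, n_buckets=10):
    """Bucket-weighted average of |empirical rain rate - mean predicted score|."""
    edges = np.linspace(0.0, 1.0, n_buckets + 1)
    ece = 0.0
    for i in range(n_buckets):
        in_bucket = (scores >= edges[i]) & (scores < edges[i + 1])
        if i == n_buckets - 1:                     # include scores equal to 1.0
            in_bucket |= scores == 1.0
        if not in_bucket.any():
            continue
        proportion = rained[in_bucket].mean()      # empirical probability of rain
        avg_score = scores[in_bucket].mean()       # average prediction in the bucket
        ece += in_bucket.mean() * abs(proportion - avg_score)
    return ece
```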

- -

Adjusting Calibration

- -

We saw above that a well-calibrated model allows us to treat our model score as a kind of probability. But if we start with a poorly calibrated model, one which is over- or under-confident, is there anything we can do to improve it? - -

It turns out that, in many settings, we can adjust the model score without really changing the model’s decisions, as long as our adjustment preserves the order of the scores2. For example, if we map all of the scores from our original model to their squares, we don’t change the order of the data with respect to the model score. Thus, quantities like accuracy will stay the same as long as we appropriately map the threshold to its square as well. However, these adjustments do change the calibration of a model by changing which data points lie in which buckets. - -
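A tiny NumPy check of this claim, with made-up scores: squaring preserves the ranking, and the decisions are unchanged as long as the threshold is squared too.

```
import numpy as np

scores = np.array([0.2, 0.5, 0.6, 0.9])
squared = scores ** 2                     # squaring is strictly monotone on [0, 1]

assert (np.argsort(scores) == np.argsort(squared)).all()   # ranking is unchanged

t = 0.4
assert ((scores > t) == (squared > t ** 2)).all()          # same decisions with t squared
```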

Try tweaking the thresholds to calibrate the model scores for our data3 – how much can you improve the model’s calibration?
- -

In general, we don’t have to rely on tweaking the model scores by hand to improve calibration. If we are trying to calibrate the model for a particular data distribution, we can use mathematical techniques like Isotonic Regression or Platt Scaling to generate the correct remapping for model scores. -
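As a sketch of what this looks like in practice, using scikit-learn with synthetic data standing in for a miscalibrated model (normally you would fit the remapping on a held-out calibration set, not the training data):

```
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
scores = rng.random(1000)                               # raw, miscalibrated scores
labels = (rng.random(1000) < scores ** 2).astype(int)   # true probability is score**2

# Isotonic regression learns a monotone, piecewise-constant remapping.
iso = IsotonicRegression(out_of_bounds="clip")
calibrated_iso = iso.fit_transform(scores, labels)

# Platt scaling fits a logistic regression on the raw score.
platt = LogisticRegression().fit(scores.reshape(-1, 1), labels)
calibrated_platt = platt.predict_proba(scores.reshape(-1, 1))[:, 1]
```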

- -

Shifting Data

- -

While good calibration is an important property for a model’s scores to be interpreted as probabilities, it alone does not capture all aspects of model uncertainty. - -

What happens if it starts to rain less frequently after we’ve trained and calibrated our model? Notice how the calibration drops, even if we use the same calibrated model scores as before. - -

Models are usually only well calibrated with respect to certain data distributions. If the data changes significantly between training and serving time, our models might cease to be well calibrated and we can’t rely on using our model scores as probabilities. -
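One quick way to see this effect with synthetic data: compare the empirical rain rate inside a single score bucket before and after the base rate of rain drops.

```
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(10_000)                               # calibrated scores
before = (rng.random(10_000) < scores).astype(int)        # world at training time
after = (rng.random(10_000) < 0.5 * scores).astype(int)   # it now rains half as often

bucket = (scores >= 0.7) & (scores < 0.8)                 # look at one bucket
print(before[bucket].mean())  # ~0.75: matches the scores, well calibrated
print(after[bucket].mean())   # ~0.38: the same scores no longer match reality
```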

- -

Beyond Calibration

- -

Calibration can sometimes be easy to game. For example, if we knew that it rains 50% of the time over the course of the year, then we could create a model with a constant prediction of 0.5 every day. This would have perfect calibration, despite not being a very useful model for distinguishing day-to-day differences in the probability of rain. This highlights an important issue: - -

Better calibration doesn’t mean more accurate predictions.
- -

It turns out that statisticians comparing weather forecasts in meteorology identified this problem with focusing solely on calibration, and came up with a solution. Proper scoring rules provide an alternative approach to measuring the quality of probabilistic forecasts: a formula measures the distance between the model’s predictions and the true event probabilities. These rules guarantee that a better value must mean a better prediction in terms of both accuracy and calibration, so they incentivize models to be both better calibrated and more accurate.
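The Brier score is one classic proper scoring rule: the mean squared distance between the predicted probability and the 0/1 outcome. A sketch with invented outcomes, showing that it penalizes the constant 0.5 forecaster from the example above even though that forecaster is perfectly calibrated:

```
import numpy as np

def brier_score(scores, outcomes):
    # Mean squared distance between the predicted probability and the 0/1 outcome.
    return np.mean((scores - outcomes) ** 2)

rained = np.array([1, 0, 1, 1, 0, 0, 1, 0])        # 50% base rate
constant = np.full(8, 0.5)                         # calibrated but uninformative
informative = np.array([0.9, 0.2, 0.8, 0.7, 0.3, 0.1, 0.6, 0.4])

print(brier_score(constant, rained))      # 0.25
print(brier_score(informative, rained))   # 0.075 -- rewarded for discriminating days
```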
-

-
-
- - -

More Reading

- -

This post is only the beginning of the discussion on the connections between machine learning models, probability, and uncertainty. In practice, when developing machine learning models with uncertainty in mind, we may need to go beyond calibration. - -

In some settings, errors are not all equal. For example, if we are training a classifier to predict if a patient needs to be tested for a disease, then a false negative (missing a case of the disease) may be more detrimental than a false positive (testing a patient unnecessarily). In such cases, we may not want a perfectly calibrated model, but may want to skew the model scores towards one class or another. The field of Statistical Decision Theory provides us with tools to determine how to better use model scores in this more general setting. In some applications, calibration may also be in tension with other important goals like model fairness. - -
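As a sketch of the standard expected-cost argument from decision theory (the costs below are invented for illustration): if the model score is a calibrated probability p of disease, testing a healthy patient costs C_FP (probability 1 − p), while skipping the test on a diseased patient costs C_FN (probability p). Testing is worthwhile whenever (1 − p)·C_FP < p·C_FN, i.e. p > C_FP / (C_FP + C_FN).

```
# Invented, illustrative costs: a missed case is 20x worse than an unneeded test.
C_FP = 1.0    # cost of testing a healthy patient
C_FN = 20.0   # cost of missing a diseased patient

threshold = C_FP / (C_FP + C_FN)   # 1/21 ~ 0.048: test almost everyone

def should_test(p: float) -> bool:
    """Decide using a calibrated probability p of disease."""
    return p > threshold
```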

Beyond this, so far we’ve only considered the case of using a single model score, i.e. a point estimate. If we trained the model a thousand times with different random seeds, or resampled the training data, we would almost certainly generate a collection of different model scores for a given input. To truly unpack the different sources of uncertainty that we might encounter, we might want to look towards distributional approaches to measuring uncertainty, using techniques like Deep Ensembles or Bayesian modeling. We will dig deeper into these in future posts. - -

Credits

- -

Nithum Thain, Adam Pearce, Jasper Snoek & Mahima Pushkarna // March 2022 - -

Thanks to Balaji Lakshminarayanan, Emily Reif, Lucas Dixon, Martin Wattenberg, Fernanda Viégas, Ian Kivlichan, Nicole Mitchell, and Meredith Morris for their help with this piece. - -

Footnotes

- -

Your decision might depend both on the probability of rain and its severity (i.e. how much rain there is going to be). We’ll focus just on the probability for now. - -

Applying a strictly monotonic function to the model scores always keeps their order the same. - -

In this example, we adjust the model scores by setting every score within a bucket to the mean of that bucket.

More Explorables

- -

- - - - - -

- - -

- - - - - - - - - - - - -

- - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/README.md b/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/README.md deleted file mode 100644 index e57e5a3ca7690ba5b38b163530268b20ab7f5010..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/server-side/fill-in-the-blank/README.md +++ /dev/null @@ -1,39 +0,0 @@ -# Python - -## Setup - -Install dependencies - -``` -python3 -m venv env -source env/bin/activate -pip install -r py/requirements.txt -``` - -Download a copy of model weights - -``` -curl https://storage.googleapis.com/uncertainty-over-space/zari-bert-cda/pytorch_model.bin -o zari-bert-cda/pytorch_model.bin - -curl https://huggingface.co/bert-large-uncased-whole-word-masking/resolve/main/pytorch_model.bin -0 bert-large-uncased-whole-word-masking/pytorch_model.bin -``` - -Start server - -``` -source env/bin/activate -cd py && python main.py -``` - -## Deploy - -The `py` folder is bundled with docker and deployed to [Cloud Run](https://cloud.google.com/run/docs/quickstarts/build-and-deploy/python). - -``` -cd py - -gcloud builds submit --tag gcr.io/uncertainty-over-space/helloworld --project=uncertainty-over-space && gcloud run deploy --image gcr.io/uncertainty-over-space/helloworld --project=uncertainty-over-space -``` - -https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds - diff --git a/spaces/metricspace/OcTra/df_local/logger.py b/spaces/metricspace/OcTra/df_local/logger.py deleted file mode 100644 index 4cb8c2b1eef32a9248be7b3a65cffd054b0871c7..0000000000000000000000000000000000000000 --- a/spaces/metricspace/OcTra/df_local/logger.py +++ /dev/null @@ -1,212 +0,0 @@ -import os -import sys -import warnings -from collections import defaultdict -from copy import deepcopy -from typing import Dict, Optional, Tuple - -import numpy as np -import torch -from loguru import logger -from torch.types import Number - -from df_local.modules import GroupedLinearEinsum -from df_local.utils import get_branch_name, get_commit_hash, get_device, get_host - -_logger_initialized = False -WARN_ONCE_NO = logger.level("WARNING").no + 1 -DEPRECATED_NO = logger.level("WARNING").no + 2 - - -def init_logger(file: Optional[str] = None, level: str = "INFO", model: Optional[str] = None): - global _logger_initialized, _duplicate_filter - if _logger_initialized: - logger.debug("Logger already initialized.") - else: - logger.remove() - level = level.upper() - if level.lower() != "none": - log_format = Formatter(debug=logger.level(level).no <= logger.level("DEBUG").no).format - logger.add( - sys.stdout, - level=level, - format=log_format, - filter=lambda r: r["level"].no not in {WARN_ONCE_NO, DEPRECATED_NO}, - ) - if file is not None: - logger.add( - file, - level=level, - format=log_format, - filter=lambda r: r["level"].no != WARN_ONCE_NO, - ) - - logger.info(f"Running on torch {torch.__version__}") - logger.info(f"Running on host {get_host()}") - commit = get_commit_hash() - if commit is not None: - logger.info(f"Git commit: {commit}, branch: {get_branch_name()}") - if (jobid := os.getenv("SLURM_JOB_ID")) is not None: - logger.info(f"Slurm jobid: {jobid}") - logger.level("WARNONCE", no=WARN_ONCE_NO, color="") - logger.add( - sys.stderr, - level=max(logger.level(level).no, WARN_ONCE_NO), - format=log_format, - filter=lambda r: r["level"].no == WARN_ONCE_NO and _duplicate_filter(r), - ) - logger.level("DEPRECATED", no=DEPRECATED_NO, color="") - logger.add( - sys.stderr, 
- level=max(logger.level(level).no, DEPRECATED_NO), - format=log_format, - filter=lambda r: r["level"].no == DEPRECATED_NO and _duplicate_filter(r), - ) - if model is not None: - logger.info("Loading model settings of {}", os.path.basename(model.rstrip("/"))) - _logger_initialized = True - - -def warn_once(message, *args, **kwargs): - logger.log("WARNONCE", message, *args, **kwargs) - - -def log_deprecated(message, *args, **kwargs): - logger.log("DEPRECATED", message, *args, **kwargs) - - -class Formatter: - def __init__(self, debug=False): - if debug: - self.fmt = ( - "{time:YYYY-MM-DD HH:mm:ss}" - " | {level: <8}" - " | {name}:{function}:{line}" - " | {message}" - ) - else: - self.fmt = ( - "{time:YYYY-MM-DD HH:mm:ss}" - " | {level: <8}" - " | DF" - " | {message}" - ) - self.fmt += "\n{exception}" - - def format(self, record): - if record["level"].no == WARN_ONCE_NO: - return self.fmt.replace("{level: <8}", "WARNING ") - return self.fmt - - -def _metrics_key(k_: Tuple[str, float]): - k0 = k_[0] - ks = k0.split("_") - if len(ks) > 2: - try: - return int(ks[-1]) - except ValueError: - return 1000 - elif k0 == "loss": - return -999 - elif "loss" in k0.lower(): - return -998 - elif k0 == "lr": - return 998 - elif k0 == "wd": - return 999 - else: - return -101 - - -def log_metrics(prefix: str, metrics: Dict[str, Number], level="INFO"): - msg = "" - stages = defaultdict(str) - loss_msg = "" - for n, v in sorted(metrics.items(), key=_metrics_key): - if abs(v) > 1e-3: - m = f" | {n}: {v:.5f}" - else: - m = f" | {n}: {v:.3E}" - if "stage" in n: - s = n.split("stage_")[1].split("_snr")[0] - stages[s] += m.replace(f"stage_{s}_", "") - elif ("valid" in prefix or "test" in prefix) and "loss" in n.lower(): - loss_msg += m - else: - msg += m - for s, msg_s in stages.items(): - logger.log(level, f"{prefix} | stage {s}" + msg_s) - if len(stages) == 0: - logger.log(level, prefix + msg) - if len(loss_msg) > 0: - logger.log(level, prefix + loss_msg) - - -class DuplicateFilter: - """ - Filters away duplicate log messages. - Modified version of: https://stackoverflow.com/a/60462619 - """ - - def __init__(self): - self.msgs = set() - - def __call__(self, record) -> bool: - k = f"{record['level']}{record['message']}" - if k in self.msgs: - return False - else: - self.msgs.add(k) - return True - - -_duplicate_filter = DuplicateFilter() - - -def log_model_summary(model: torch.nn.Module, verbose=False): - try: - import ptflops - except ImportError: - logger.debug("Failed to import ptflops. 
Cannot print model summary.") - return - - from df_local.model import ModelParams - - # Generate input of 1 second audio - # Necessary inputs are: - # spec: [B, 1, T, F, 2], F: freq bin - # feat_erb: [B, 1, T, E], E: ERB bands - # feat_spec: [B, 2, T, C*2], C: Complex features - p = ModelParams() - b = 1 - t = p.sr // p.hop_size - device = get_device() - spec = torch.randn([b, 1, t, p.fft_size // 2 + 1, 2]).to(device) - feat_erb = torch.randn([b, 1, t, p.nb_erb]).to(device) - feat_spec = torch.randn([b, 1, t, p.nb_df, 2]).to(device) - - warnings.filterwarnings("ignore", "RNN module weights", category=UserWarning, module="torch") - macs, params = ptflops.get_model_complexity_info( - deepcopy(model), - (t,), - input_constructor=lambda _: {"spec": spec, "feat_erb": feat_erb, "feat_spec": feat_spec}, - as_strings=False, - print_per_layer_stat=verbose, - verbose=verbose, - custom_modules_hooks={ - GroupedLinearEinsum: grouped_linear_flops_counter_hook, - }, - ) - logger.info(f"Model complexity: {params/1e6:.3f}M #Params, {macs/1e6:.1f}M MACS") - - -def grouped_linear_flops_counter_hook(module: GroupedLinearEinsum, input, output): - # input: ([B, T, I],) - # output: [B, T, H] - input = input[0] # [B, T, I] - output_last_dim = module.weight.shape[-1] - input = input.unflatten(-1, (module.groups, module.ws)) # [B, T, G, I/G] - # GroupedLinear calculates "...gi,...gih->...gh" - weight_flops = np.prod(input.shape) * output_last_dim - module.__flops__ += int(weight_flops) # type: ignore diff --git a/spaces/mikeee/langchain-llama2-7b-chat-uncensored-ggml/app.py b/spaces/mikeee/langchain-llama2-7b-chat-uncensored-ggml/app.py deleted file mode 100644 index cd31257a825ae4a198b723a192eed8e6357631b3..0000000000000000000000000000000000000000 --- a/spaces/mikeee/langchain-llama2-7b-chat-uncensored-ggml/app.py +++ /dev/null @@ -1,554 +0,0 @@ -"""Run codes.""" -# pylint: disable=line-too-long, broad-exception-caught, invalid-name, missing-function-docstring, too-many-instance-attributes, missing-class-docstring -# ruff: noqa: E501 -import gc -import os -import platform -import random -import time -from collections import deque -from pathlib import Path -from threading import Thread -from typing import Any, Dict, List, Union - -# from types import SimpleNamespace -import gradio as gr -import psutil -from about_time import about_time -from ctransformers import Config -from dl_hf_model import dl_hf_model -from langchain.callbacks.base import BaseCallbackHandler -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler -from langchain.chains import ConversationChain -from langchain.chains.conversation.memory import ConversationBufferWindowMemory - -# from ctransformers import AutoModelForCausalLM -from langchain.llms import CTransformers -from langchain.prompts import PromptTemplate -from langchain.schema import LLMResult -from loguru import logger - -deq = deque() -sig_end = object() # signals the processing is done - -# from langchain.llms import OpenAI - -filename_list = [ - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q2_K.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q3_K_L.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q3_K_M.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q3_K_S.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_0.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_K_M.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_K_S.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_0.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_1.bin", - 
"Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_K_M.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_K_S.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q6_K.bin", - "Wizard-Vicuna-7B-Uncensored.ggmlv3.q8_0.bin", -] - -URL = "https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GGML/raw/main/Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_K_M.bin" # 4.05G - -url = "https://huggingface.co/savvamadar/ggml-gpt4all-j-v1.3-groovy/blob/main/ggml-gpt4all-j-v1.3-groovy.bin" -url = "https://huggingface.co/TheBloke/Llama-2-13B-GGML/blob/main/llama-2-13b.ggmlv3.q4_K_S.bin" # 7.37G -# url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q3_K_L.bin" -url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q3_K_L.bin" # 6.93G -# url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q3_K_L.binhttps://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q4_K_M.bin" # 7.87G - -url = "https://huggingface.co/localmodels/Llama-2-13B-Chat-ggml/blob/main/llama-2-13b-chat.ggmlv3.q4_K_S.bin" # 7.37G - -_ = ( - "golay" in platform.node() - or "okteto" in platform.node() - or Path("/kaggle").exists() - # or psutil.cpu_count(logical=False) < 4 - or 1 # run 7b in hf -) - -if _: - # url = "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/blob/main/llama-2-13b-chat.ggmlv3.q2_K.bin" - url = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/blob/main/llama-2-7b-chat.ggmlv3.q2_K.bin" # 2.87G - url = "https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/blob/main/llama-2-7b-chat.ggmlv3.q4_K_M.bin" # 2.87G - url = "https://huggingface.co/TheBloke/llama2_7b_chat_uncensored-GGML/blob/main/llama2_7b_chat_uncensored.ggmlv3.q4_K_M.bin" # 4.08G - - -prompt_template = """Below is an instruction that describes a task. Write a response that appropriately completes the request. - -### Instruction: {user_prompt} - -### Response: -""" - -prompt_template = """System: You are a helpful, -respectful and honest assistant. Always answer as -helpfully as possible, while being safe. Your answers -should not include any harmful, unethical, racist, -sexist, toxic, dangerous, or illegal content. Please -ensure that your responses are socially unbiased and -positive in nature. If a question does not make any -sense, or is not factually coherent, explain why instead -of answering something not correct. If you don't know -the answer to a question, please don't share false -information. -User: {prompt} -Assistant: """ - -prompt_template = """System: You are a helpful assistant. -User: {prompt} -Assistant: """ - -prompt_template = """Question: {question} -Answer: Let's work this out in a step by step way to be sure we have the right answer.""" - -prompt_template = """[INST] <> -You are a helpful, respectful and honest assistant. Always answer as helpfully as possible assistant. Think step by step. -<> - -What NFL team won the Super Bowl in the year Justin Bieber was born? -[/INST]""" - -prompt_template = """[INST] <> -You are an unhelpful assistant. Always answer as helpfully as possible. Think step by step. <> - -{question} [/INST] -""" - -prompt_template = """[INST] <> -You are a helpful assistant. -<> - -{question} [/INST] -""" - -prompt_template = """### HUMAN: -{question} - -### RESPONSE:""" - -prompt_template = """### HUMAN: -You are a helpful assistant. Think step by step. -{history} -{input} -### RESPONSE:""" - -prompt_template = """You are a helpful assistant. Let's think step by step. 
-{history} -### HUMAN: -{input} -### RESPONSE:""" - -# PromptTemplate(input_variables=['history', 'input'], output_parser=None, partial_variables={}, template='The following is afriendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n{history}\nHuman: {input}\nAI:', template_format='f-string', validate_template=True) - -human_prefix = "### HUMAN" -ai_prefix = "### RESPONSE" -stop = [f"{human_prefix}:"] - -_ = [elm for elm in prompt_template.splitlines() if elm.strip()] -stop_string = [elm.split(":")[0] + ":" for elm in _][-2] - -# logger.debug(f"{stop_string=} not used") - -os.environ["TZ"] = "Asia/Shanghai" -try: - time.tzset() # type: ignore # pylint: disable=no-member -except Exception: - # Windows - logger.warning("Windows, cant run time.tzset()") - - -class DequeCallbackHandler(BaseCallbackHandler): - """Mediate gradio and stream output.""" - - def __init__(self, deq_: deque): - """Init deque for FIFO, may need to upgrade to queue.Queue or queue.SimpleQueue.""" - self.q = deq_ - - # def on_chat_model_start(self): self.q.clear() - - def on_llm_start( - self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any - ) -> None: - """Run when LLM starts running. Clean the queue.""" - self.q.clear() - - def on_llm_new_token(self, token: str, **kwargs: Any) -> None: - """Run on new LLM token. Only available when streaming is enabled.""" - self.q.append(token) - - def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None: - """Run when LLM ends running.""" - self.q.append(sig_end) - - def on_llm_error( - self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any - ) -> None: - """Run when LLM errors.""" - self.q.append(sig_end) - - -_ = psutil.cpu_count(logical=False) - 1 -cpu_count: int = int(_) if _ else 1 -logger.debug(f"{cpu_count=}") - -LLM = None -gc.collect() - -try: - model_loc, file_size = dl_hf_model(url) -except Exception as exc_: - logger.error(exc_) - raise SystemExit(1) from exc_ - -config = Config() -# Config(top_k=40, top_p=0.95, temperature=0.8, repetition_penalty=1.1, last_n_tokens=64, seed=-1, batch_size=8, threads=-1, max_new_tokens=256, stop=None, stream=False, reset=True, context_length=-1, gpu_layers=0) -config.stream = True -config.stop = stop -config.threads = cpu_count - -deqcb = DequeCallbackHandler(deq) - -# LLM = AutoModelForCausalLM.from_pretrained( -LLM = CTransformers( - model=model_loc, - model_type="llama", - callbacks=[StreamingStdOutCallbackHandler(), deqcb], - # config=config, - **vars(config), -) - -logger.info(f"done load llm {model_loc=} {file_size=}G") - -prompt = PromptTemplate( - input_variables=["history", "input"], - output_parser=None, - partial_variables={}, - template=prompt_template, - template_format="f-string", - validate_template=True, -) - -memory = ConversationBufferWindowMemory( - human_prefix=human_prefix, - ai_prefix=ai_prefix, -) # default k=5 - -conversation = ConversationChain( - llm=LLM, - prompt=prompt, - memory=memory, - verbose=True, -) -logger.debug(f"{conversation.prompt.template=}") # type: ignore - -# for api access === -config = Config() -# Config(top_k=40, top_p=0.95, temperature=0.8, repetition_penalty=1.1, last_n_tokens=64, seed=-1, batch_size=8, threads=-1, max_new_tokens=256, stop=None, stream=False, reset=True, context_length=-1, gpu_layers=0) -config.stop = stop -config.threads = cpu_count - -try: - LLM_api = 
CTransformers( - model=model_loc, - model_type="llama", - # callbacks=[StreamingStdOutCallbackHandler(), deqcb], - callbacks=[StreamingStdOutCallbackHandler()], - **vars(config), - ) - conversation_api = ConversationChain( - llm=LLM_api, # need a separate LLM, or else deq may be messed up - prompt=prompt, - verbose=True, - ) -except Exception as exc_: - logger.error(exc_) - conversation_api = None - logger.warning("Not able to instantiate conversation_api, api will not work") - -# conversation.predict(input="Hello, my name is Andrea") - - -def user(user_message, history): - # return user_message, history + [[user_message, None]] - history.append([user_message, None]) - return user_message, history # keep user_message - - -def user1(user_message, history): - # return user_message, history + [[user_message, None]] - history.append([user_message, None]) - return "", history # clear user_message - - -def bot_(history): - user_message = history[-1][0] - resp = random.choice(["How are you?", "I love you", "I'm very hungry"]) - bot_message = user_message + ": " + resp - history[-1][1] = "" - for character in bot_message: - history[-1][1] += character - time.sleep(0.02) - yield history - - history[-1][1] = resp - yield history - - -def bot(history): - user_message = history[-1][0] - response = [] - - logger.debug(f"{user_message=}") - - # conversation.predict(input="What's my name?") - thr = Thread(target=conversation.predict, kwargs={"input": user_message}) - thr.start() - - # preocess deq - response = [] - flag = 1 - then = time.time() - prefix = "" # to please pyright - with about_time() as atime: # type: ignore - while True: - if deq: - if flag: - prefix = f"({time.time() - then:.2f}s) " - flag = 0 - _ = deq.popleft() - if _ is sig_end: - break - # print(_, end='') - response.append(_) - history[-1][1] = prefix + "".join(response).strip() - yield history - else: - time.sleep(0.01) - _ = ( - f"(time elapsed: {atime.duration_human}, " # type: ignore - f"{atime.duration/len(''.join(response)):.2f}s/char)" # type: ignore - ) - - history[-1][1] = "".join(response) + f"\n{_}" - yield history - - -def predict_api(user_prompt): - if conversation_api is None: - return "conversation_api is None, probably due to insufficient memory, api not usable" - - logger.debug(f"api: {user_prompt=}") - try: - _ = """ - response = generate( - prompt, - config=config, - ) - # """ - response = conversation_api.predict(input=user_prompt) - logger.debug(f"api: {response=}") - except Exception as exc: - logger.error(exc) - response = f"{exc=}" - # bot = {"inputs": [response]} - # bot = [(prompt, response)] - - return response.strip() - - -css = """ - .importantButton { - background: linear-gradient(45deg, #7e0570,#5d1c99, #6e00ff) !important; - border: none !important; - } - .importantButton:hover { - background: linear-gradient(45deg, #ff00e0,#8500ff, #6e00ff) !important; - border: none !important; - } - .disclaimer {font-variant-caps: all-small-caps; font-size: xx-small;} - .xsmall {font-size: x-small;} -""" -etext = """In America, where cars are an important part of the national psyche, a decade ago people had suddenly started to drive less, which had not happened since the oil shocks of the 1970s. """ -examples_list = [ - ["Hello I am mike."], - ["What's my name?"], - ["What NFL team won the Super Bowl in the year Justin Bieber was born?"], - [ - "What NFL team won the Super Bowl in the year Justin Bieber was born? Think step by step." 
- ], - ["When was Justin Bieber born?"], - ["What NFL team won the Super Bowl in 1994?"], - ["How to pick a lock? Provide detailed steps."], - [ - "If it takes 10 hours to dry 10 clothes, assuming all the clothes are hanged together at the same time for drying , then how long will it take to dry a cloth?" - ], - ["is infinity + 1 bigger than infinity?"], - ["Explain the plot of Cinderella in a sentence."], - [ - "How long does it take to become proficient in French, and what are the best methods for retaining information?" - ], - ["What are some common mistakes to avoid when writing code?"], - ["Build a prompt to generate a beautiful portrait of a horse"], - ["Suggest four metaphors to describe the benefits of AI"], - ["Write a pop song about leaving home for the sandy beaches."], - ["Write a pop song about having hot sex on a sandy beach."], - ["Write a summary demonstrating my ability to tame lions"], - ["鲁迅和周树人什么关系? 说中文。"], - ["鲁迅和周树人什么关系?"], - ["鲁迅和周树人什么关系? 用英文回答。"], - ["从前有一头牛,这头牛后面有什么?"], - ["正无穷大加一大于正无穷大吗?"], - ["正无穷大加正无穷大大于正无穷大吗?"], - ["-2的平方根等于什么?"], - ["树上有5只鸟,猎人开枪打死了一只。树上还有几只鸟?"], - ["树上有11只鸟,猎人开枪打死了一只。树上还有几只鸟?提示:需考虑鸟可能受惊吓飞走。"], - ["以红楼梦的行文风格写一张委婉的请假条。不少于320字。"], - [f"{etext} 翻成中文,列出3个版本。"], - [f"{etext} \n 翻成中文,保留原意,但使用文学性的语言。不要写解释。列出3个版本。"], - ["假定 1 + 2 = 4, 试求 7 + 8。"], - ["给出判断一个数是不是质数的 javascript 码。"], - ["给出实现python 里 range(10)的 javascript 码。"], - ["给出实现python 里 [*(range(10)]的 javascript 码。"], - ["Erkläre die Handlung von Cinderella in einem Satz."], - ["Erkläre die Handlung von Cinderella in einem Satz. Auf Deutsch."], -] - -logger.info("start block") - -with gr.Blocks( - title=f"{Path(model_loc).name}", - theme=gr.themes.Soft(text_size="sm", spacing_size="sm"), - css=css, -) as block: - # buff_var = gr.State("") - with gr.Accordion("🎈 Info", open=False): - # gr.HTML( - # """
Duplicate and spin a CPU UPGRADE to avoid the queue
""" - # ) - gr.Markdown( - f"""
{Path(model_loc).name}
- The bot can conduct multi-turn conversations, i.e. it remembers past dialogs. The process time is longer. - It typically takes about 120 seconds for the first response to appear. - - Most examples are meant for another model. - You probably should try to test - some related prompts.""", - elem_classes="xsmall", - ) - - chatbot = gr.Chatbot(height=500) - - with gr.Row(): - with gr.Column(scale=5): - msg = gr.Textbox( - label="Chat Message Box", - placeholder="Ask me anything (press Shift+Enter or click Submit to send)", - show_label=False, - # container=False, - lines=6, - max_lines=30, - show_copy_button=True, - # ).style(container=False) - ) - with gr.Column(scale=1, min_width=50): - with gr.Row(): - submit = gr.Button("Submit", elem_classes="xsmall") - stop = gr.Button("Stop", visible=True) - clear = gr.Button("Clear History", visible=True) - with gr.Row(visible=False): - with gr.Accordion("Advanced Options:", open=False): - with gr.Row(): - with gr.Column(scale=2): - system = gr.Textbox( - label="System Prompt", - value=prompt_template, - show_label=False, - container=False, - # ).style(container=False) - ) - with gr.Column(): - with gr.Row(): - change = gr.Button("Change System Prompt") - reset = gr.Button("Reset System Prompt") - - with gr.Accordion("Example Inputs", open=True): - examples = gr.Examples( - examples=examples_list, - inputs=[msg], - examples_per_page=40, - ) - - with gr.Accordion("Disclaimer", open=False): - _ = Path(model_loc).name - gr.Markdown( - f"Disclaimer: {_} can produce factually incorrect output, and should not be relied on to produce " - "factually accurate information. {_} was trained on various public datasets; while great efforts " - "have been taken to clean the pretraining data, it is possible that this model could generate lewd, " - "biased, or otherwise offensive outputs.", - elem_classes=["disclaimer"], - ) - - msg_submit_event = msg.submit( - # fn=conversation.user_turn, - fn=user, - inputs=[msg, chatbot], - outputs=[msg, chatbot], - queue=True, - show_progress="full", - # api_name=None, - ).then(bot, chatbot, chatbot, queue=True) - submit_click_event = submit.click( - # fn=lambda x, y: ("",) + user(x, y)[1:], # clear msg - fn=user1, # clear msg - inputs=[msg, chatbot], - outputs=[msg, chatbot], - queue=True, - # queue=False, - show_progress="full", - # api_name=None, - ).then(bot, chatbot, chatbot, queue=True) - stop.click( - fn=None, - inputs=None, - outputs=None, - cancels=[msg_submit_event, submit_click_event], - queue=False, - ) - - # TODO: clear conversation memory as well - clear.click(lambda: None, None, chatbot, queue=False) - - with gr.Accordion("For Chat/Translation API", open=False, visible=False): - input_text = gr.Text() - api_btn = gr.Button("Go", variant="primary") - out_text = gr.Text() - - if conversation_api is not None: - api_btn.click( - predict_api, - input_text, - out_text, - api_name="api", - ) - -# concurrency_count=5, max_size=20 -# max_size=36, concurrency_count=14 -# CPU cpu_count=2 16G, model 7G -# CPU UPGRADE cpu_count=8 32G, model 7G - -# does not work -_ = """ -# _ = int(psutil.virtual_memory().total / 10**9 // file_size - 1) -# concurrency_count = max(_, 1) -if psutil.cpu_count(logical=False) >= 8: - # concurrency_count = max(int(32 / file_size) - 1, 1) -else: - # concurrency_count = max(int(16 / file_size) - 1, 1) -# """ - -concurrency_count = 1 -logger.info(f"{concurrency_count=}") - -block.queue(concurrency_count=concurrency_count, max_size=5).launch(debug=True) diff --git a/spaces/mikeee/ttw/gradiobee/en2zh.py 
b/spaces/mikeee/ttw/gradiobee/en2zh.py deleted file mode 100644 index d590013ea280def60ca971bd34046bf38b8524e1..0000000000000000000000000000000000000000 --- a/spaces/mikeee/ttw/gradiobee/en2zh.py +++ /dev/null @@ -1,40 +0,0 @@ -"""Translate english to chinese via a dict.""" -from typing import List, Union - -import warnings - -import copy -from gradiobee.mdx_e2c import mdx_e2c - -warnings.simplefilter('ignore', DeprecationWarning) - - -# fmt: off -def en2zh( - # text: Union[str, List[List[str]]], - text: Union[str, List[str]], -) -> List[str]: - # fmt: on - """Translate english to chinese via a dict. - - Args - text: to translate, list of str - - Returns - res: list of str - """ - res = copy.deepcopy(text) - if isinstance(text, str): - # res = [text.split()] - res = [text] - - # if res and isinstance(res[0], str): - # res = [line.lower().split() for line in res] - - # res = ["".join([word_tr(word) for word in line]) for line in res] - _ = [] - for line in res: - line_tr = [mdx_e2c(word) for word in line.split()] - _.append("".join(line_tr)) - - return _ diff --git a/spaces/mingyuan/MotionDiffuse/trainers/ddpm_trainer.py b/spaces/mingyuan/MotionDiffuse/trainers/ddpm_trainer.py deleted file mode 100644 index 1ea113cd20f87acc8d31f1551c808cc20a835e9f..0000000000000000000000000000000000000000 --- a/spaces/mingyuan/MotionDiffuse/trainers/ddpm_trainer.py +++ /dev/null @@ -1,222 +0,0 @@ -import torch -import torch.nn.functional as F -import random -import time -from models.transformer import MotionTransformer -from torch.utils.data import DataLoader -import torch.optim as optim -from torch.nn.utils import clip_grad_norm_ -from collections import OrderedDict -from utils.utils import print_current_loss -from os.path import join as pjoin -import codecs as cs -import torch.distributed as dist - - -from mmcv.runner import get_dist_info -from models.gaussian_diffusion import ( - GaussianDiffusion, - get_named_beta_schedule, - create_named_schedule_sampler, - ModelMeanType, - ModelVarType, - LossType -) - -from datasets import build_dataloader - - -class DDPMTrainer(object): - - def __init__(self, args, encoder): - self.opt = args - self.device = args.device - self.encoder = encoder - self.diffusion_steps = args.diffusion_steps - sampler = 'uniform' - beta_scheduler = 'linear' - betas = get_named_beta_schedule(beta_scheduler, self.diffusion_steps) - self.diffusion = GaussianDiffusion( - betas=betas, - model_mean_type=ModelMeanType.EPSILON, - model_var_type=ModelVarType.FIXED_SMALL, - loss_type=LossType.MSE - ) - self.sampler = create_named_schedule_sampler(sampler, self.diffusion) - self.sampler_name = sampler - - if args.is_train: - self.mse_criterion = torch.nn.MSELoss(reduction='none') - self.to(self.device) - - @staticmethod - def zero_grad(opt_list): - for opt in opt_list: - opt.zero_grad() - - @staticmethod - def clip_norm(network_list): - for network in network_list: - clip_grad_norm_(network.parameters(), 0.5) - - @staticmethod - def step(opt_list): - for opt in opt_list: - opt.step() - - def forward(self, batch_data, eval_mode=False): - caption, motions, m_lens = batch_data - motions = motions.detach().to(self.device).float() - - self.caption = caption - self.motions = motions - x_start = motions - B, T = x_start.shape[:2] - cur_len = torch.LongTensor([min(T, m_len) for m_len in m_lens]).to(self.device) - t, _ = self.sampler.sample(B, x_start.device) - output = self.diffusion.training_losses( - model=self.encoder, - x_start=x_start, - t=t, - model_kwargs={"text": caption, "length": cur_len} - ) 
- - self.real_noise = output['target'] - self.fake_noise = output['pred'] - try: - self.src_mask = self.encoder.module.generate_src_mask(T, cur_len).to(x_start.device) - except: - self.src_mask = self.encoder.generate_src_mask(T, cur_len).to(x_start.device) - - def generate_batch(self, caption, m_lens, dim_pose): - xf_proj, xf_out = self.encoder.encode_text(caption, self.device) - - B = len(caption) - T = min(m_lens.max(), self.encoder.num_frames) - output = self.diffusion.p_sample_loop( - self.encoder, - (B, T, dim_pose), - clip_denoised=False, - progress=True, - model_kwargs={ - 'xf_proj': xf_proj, - 'xf_out': xf_out, - 'length': m_lens - }) - return output - - def generate(self, caption, m_lens, dim_pose, batch_size=1024): - N = len(caption) - cur_idx = 0 - self.encoder.eval() - all_output = [] - while cur_idx < N: - if cur_idx + batch_size >= N: - batch_caption = caption[cur_idx:] - batch_m_lens = m_lens[cur_idx:] - else: - batch_caption = caption[cur_idx: cur_idx + batch_size] - batch_m_lens = m_lens[cur_idx: cur_idx + batch_size] - output = self.generate_batch(batch_caption, batch_m_lens, dim_pose) - B = output.shape[0] - - for i in range(B): - all_output.append(output[i]) - cur_idx += batch_size - return all_output - - def backward_G(self): - loss_mot_rec = self.mse_criterion(self.fake_noise, self.real_noise).mean(dim=-1) - loss_mot_rec = (loss_mot_rec * self.src_mask).sum() / self.src_mask.sum() - self.loss_mot_rec = loss_mot_rec - loss_logs = OrderedDict({}) - loss_logs['loss_mot_rec'] = self.loss_mot_rec.item() - return loss_logs - - def update(self): - self.zero_grad([self.opt_encoder]) - loss_logs = self.backward_G() - self.loss_mot_rec.backward() - self.clip_norm([self.encoder]) - self.step([self.opt_encoder]) - - return loss_logs - - def to(self, device): - if self.opt.is_train: - self.mse_criterion.to(device) - self.encoder = self.encoder.to(device) - - def train_mode(self): - self.encoder.train() - - def eval_mode(self): - self.encoder.eval() - - def save(self, file_name, ep, total_it): - state = { - 'opt_encoder': self.opt_encoder.state_dict(), - 'ep': ep, - 'total_it': total_it - } - try: - state['encoder'] = self.encoder.module.state_dict() - except: - state['encoder'] = self.encoder.state_dict() - torch.save(state, file_name) - return - - def load(self, model_dir): - checkpoint = torch.load(model_dir, map_location=self.device) - if self.opt.is_train: - self.opt_encoder.load_state_dict(checkpoint['opt_encoder']) - self.encoder.load_state_dict(checkpoint['encoder'], strict=True) - return checkpoint['ep'], checkpoint.get('total_it', 0) - - def train(self, train_dataset): - rank, world_size = get_dist_info() - self.to(self.device) - self.opt_encoder = optim.Adam(self.encoder.parameters(), lr=self.opt.lr) - it = 0 - cur_epoch = 0 - if self.opt.is_continue: - model_dir = pjoin(self.opt.model_dir, 'latest.tar') - cur_epoch, it = self.load(model_dir) - - start_time = time.time() - - train_loader = build_dataloader( - train_dataset, - samples_per_gpu=self.opt.batch_size, - drop_last=True, - workers_per_gpu=4, - shuffle=True) - - logs = OrderedDict() - for epoch in range(cur_epoch, self.opt.num_epochs): - self.train_mode() - for i, batch_data in enumerate(train_loader): - self.forward(batch_data) - log_dict = self.update() - for k, v in log_dict.items(): - if k not in logs: - logs[k] = v - else: - logs[k] += v - it += 1 - if it % self.opt.log_every == 0 and rank == 0: - mean_loss = OrderedDict({}) - for tag, value in logs.items(): - mean_loss[tag] = value / self.opt.log_every - 
logs = OrderedDict() - print_current_loss(start_time, it, mean_loss, epoch, inner_iter=i) - - if it % self.opt.save_latest == 0 and rank == 0: - self.save(pjoin(self.opt.model_dir, 'latest.tar'), epoch, it) - - if rank == 0: - self.save(pjoin(self.opt.model_dir, 'latest.tar'), epoch, it) - - if epoch % self.opt.save_every_e == 0 and rank == 0: - self.save(pjoin(self.opt.model_dir, 'ckpt_e%03d.tar'%(epoch)), - epoch, total_it=it) diff --git a/spaces/ml6team/distilbart-tos-summarizer-tosdr/app.py b/spaces/ml6team/distilbart-tos-summarizer-tosdr/app.py deleted file mode 100644 index 7e94ccb0b25c4030864081de48d8e1e995fd3298..0000000000000000000000000000000000000000 --- a/spaces/ml6team/distilbart-tos-summarizer-tosdr/app.py +++ /dev/null @@ -1,137 +0,0 @@ -import html -import os -from typing import AnyStr - -import nltk -import streamlit as st -import validators -from transformers import pipeline -from validators import ValidationFailure - -from Summarizer import Summarizer - - -def main() -> None: - nltk.download('punkt') - - st.markdown('# Terms & Conditions Summarizer :pencil:') - st.markdown('Do you also always take the time out of your day to thoroughly read every word of the Terms & Conditions before signing up to an app like the responsible citizen that you are? :thinking_face:
' - 'No?
' - "Well don't worry, neither do we! That's why we created a Terms & Conditions Summarization algorithm!", unsafe_allow_html=True) - st.markdown('Just copy-paste that pesky Terms & Conditions text and let our fancy NLP algorithm do the rest!
' - 'You will see both an extractive summary (the most important sentences will be highlighted) and an abstractive summary (an actual summary)
' - 'The abstractive summary will give you an idea of what the key message of the document likely is :bulb:', unsafe_allow_html=True) - st.markdown('Want to find out more? :brain:
' - 'For details about the extractive part :point_right: https://en.wikipedia.org/wiki/Latent_semantic_analysis
' - 'For details about the abstractive part :point_right: https://huggingface.co/ml6team/distilbart-tos-summarizer-tosdr', unsafe_allow_html=True) - - @st.cache(allow_output_mutation=True, - suppress_st_warning=True, - show_spinner=False) - def create_pipeline(): - with st.spinner('Please wait for the model to load...'): - terms_and_conditions_pipeline = pipeline( - task='summarization', - model='ml6team/distilbart-tos-summarizer-tosdr', - tokenizer='ml6team/distilbart-tos-summarizer-tosdr' - ) - return terms_and_conditions_pipeline - - def display_abstractive_summary(summary_sentences: list) -> None: - st.subheader("Abstractive Summary") - st.markdown('#####') - for sentence in summary_sentences: - st.markdown(f"- {sentence}", unsafe_allow_html=True) - - def display_extractive_summary(terms_and_conditions_text: str, summary_sentences: list) -> None: - st.subheader("Extractive Summary") - st.markdown('#####') - replaced_text = html.escape(terms_and_conditions_text) - for sentence in summary_sentences: - escaped_sentence = html.escape(sentence) - replaced_text = replaced_text.replace(escaped_sentence, - f"

" - f"{escaped_sentence}" - f"

") - replaced_text = replaced_text.replace('\n', '
') - with st.container(): - st.write(f"

{replaced_text}

", unsafe_allow_html=True) - - def is_valid_url(url: str) -> bool: - result = validators.url(url) - if isinstance(result, ValidationFailure): - return False - return True - - def list_all_filenames() -> list: - filenames = [] - for file in os.listdir('./sample-terms-and-conditions/'): - if file.endswith('.txt'): - filenames.append(file.replace('.txt', '')) - return filenames - - def fetch_file_contents(filename: str) -> AnyStr: - with open(f'./sample-terms-and-conditions/{filename.lower()}.txt', 'r') as f: - data = f.read() - return data - - summarizer: Summarizer = Summarizer(create_pipeline()) - - if 'tc_text' not in st.session_state: - st.session_state['tc_text'] = '' - - if 'sentences_length' not in st.session_state: - st.session_state['sentences_length'] = Summarizer.DEFAULT_EXTRACTED_ARTICLE_SENTENCES_LENGTH - - if 'sample_choice' not in st.session_state: - st.session_state['sample_choice'] = '' - - st.header("Input") - - sentences_length = st.number_input( - label='Number of sentences to be extracted:', - min_value=5, - max_value=15, - value=st.session_state.sentences_length - ) - sample_choice = st.selectbox( - 'Choose a sample terms & conditions:', - list_all_filenames()) - st.session_state.tc_text = fetch_file_contents(sample_choice) - tc_text_input = st.text_area( - value=st.session_state.tc_text, - label='Terms & conditions content or paste your own T&C:', - height=240 - ) - - summarize_button = st.button(label='Summarize') - - @st.cache(suppress_st_warning=True, - show_spinner=False, - allow_output_mutation=True, - hash_funcs={"torch.nn.parameter.Parameter": lambda _: None, - "tokenizers.Tokenizer": lambda _: None, - "tokenizers.AddedToken": lambda _: None, - }) - def abstractive_summary_from_cache(summary_sentences: tuple) -> tuple: - with st.spinner('Summarizing the text is in progress...'): - return tuple(summarizer.abstractive_summary(list(summary_sentences))) - - if summarize_button: - - if is_valid_url(tc_text_input): - extract_summary_sentences = summarizer.extractive_summary_from_url(tc_text_input, sentences_length) - else: - extract_summary_sentences = summarizer.extractive_summary_from_text(tc_text_input, sentences_length) - - extract_summary_sentences_tuple = tuple(extract_summary_sentences) - abstract_summary_tuple = abstractive_summary_from_cache(extract_summary_sentences_tuple) - abstract_summary_list = list(abstract_summary_tuple) - - display_abstractive_summary(abstract_summary_list) - display_extractive_summary(tc_text_input, extract_summary_sentences) - - -if __name__ == "__main__": - main() - diff --git a/spaces/molok3/alea31415-onimai-characters/app.py b/spaces/molok3/alea31415-onimai-characters/app.py deleted file mode 100644 index 82f3ce894bbe01723fd04c1e6532bd765052d78e..0000000000000000000000000000000000000000 --- a/spaces/molok3/alea31415-onimai-characters/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/alea31415/onimai-characters").launch() \ No newline at end of file diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py b/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py deleted file mode 100644 index 7a7696403d505afdf0f1606f8220801b0f46152f..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py +++ /dev/null @@ -1,311 +0,0 @@ -# ***************************************************************************** -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. 
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the NVIDIA CORPORATION nor the -# names of its contributors may be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY -# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -# -# ***************************************************************************** -import copy -import torch -from torch.autograd import Variable -import torch.nn.functional as F - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a+input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class WaveGlowLoss(torch.nn.Module): - def __init__(self, sigma=1.0): - super(WaveGlowLoss, self).__init__() - self.sigma = sigma - - def forward(self, model_output): - z, log_s_list, log_det_W_list = model_output - for i, log_s in enumerate(log_s_list): - if i == 0: - log_s_total = torch.sum(log_s) - log_det_W_total = log_det_W_list[i] - else: - log_s_total = log_s_total + torch.sum(log_s) - log_det_W_total += log_det_W_list[i] - - loss = torch.sum(z*z)/(2*self.sigma*self.sigma) - log_s_total - log_det_W_total - return loss/(z.size(0)*z.size(1)*z.size(2)) - - -class Invertible1x1Conv(torch.nn.Module): - """ - The layer outputs both the convolution, and the log determinant - of its weight matrix. 
If reverse=True it does convolution with - inverse - """ - def __init__(self, c): - super(Invertible1x1Conv, self).__init__() - self.conv = torch.nn.Conv1d(c, c, kernel_size=1, stride=1, padding=0, - bias=False) - - # Sample a random orthonormal matrix to initialize weights - W = torch.qr(torch.FloatTensor(c, c).normal_())[0] - - # Ensure determinant is 1.0 not -1.0 - if torch.det(W) < 0: - W[:,0] = -1*W[:,0] - W = W.view(c, c, 1) - self.conv.weight.data = W - - def forward(self, z, reverse=False): - # shape - batch_size, group_size, n_of_groups = z.size() - - W = self.conv.weight.squeeze() - - if reverse: - if not hasattr(self, 'W_inverse'): - # Reverse computation - W_inverse = W.float().inverse() - W_inverse = Variable(W_inverse[..., None]) - if z.type() == 'torch.cuda.HalfTensor': - W_inverse = W_inverse.half() - self.W_inverse = W_inverse - z = F.conv1d(z, self.W_inverse, bias=None, stride=1, padding=0) - return z - else: - # Forward computation - log_det_W = batch_size * n_of_groups * torch.logdet(W) - z = self.conv(z) - return z, log_det_W - - -class WN(torch.nn.Module): - """ - This is the WaveNet like layer for the affine coupling. The primary difference - from WaveNet is the convolutions need not be causal. There is also no dilation - size reset. The dilation only doubles on each layer - """ - def __init__(self, n_in_channels, n_mel_channels, n_layers, n_channels, - kernel_size): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - assert(n_channels % 2 == 0) - self.n_layers = n_layers - self.n_channels = n_channels - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - - start = torch.nn.Conv1d(n_in_channels, n_channels, 1) - start = torch.nn.utils.weight_norm(start, name='weight') - self.start = start - - # Initializing last layer to 0 makes the affine coupling layers - # do nothing at first. 
This helps with training stability - end = torch.nn.Conv1d(n_channels, 2*n_in_channels, 1) - end.weight.data.zero_() - end.bias.data.zero_() - self.end = end - - cond_layer = torch.nn.Conv1d(n_mel_channels, 2*n_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = 2 ** i - padding = int((kernel_size*dilation - dilation)/2) - in_layer = torch.nn.Conv1d(n_channels, 2*n_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2*n_channels - else: - res_skip_channels = n_channels - res_skip_layer = torch.nn.Conv1d(n_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, forward_input): - audio, spect = forward_input - audio = self.start(audio) - output = torch.zeros_like(audio) - n_channels_tensor = torch.IntTensor([self.n_channels]) - - spect = self.cond_layer(spect) - - for i in range(self.n_layers): - spect_offset = i*2*self.n_channels - acts = fused_add_tanh_sigmoid_multiply( - self.in_layers[i](audio), - spect[:,spect_offset:spect_offset+2*self.n_channels,:], - n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - audio = audio + res_skip_acts[:,:self.n_channels,:] - output = output + res_skip_acts[:,self.n_channels:,:] - else: - output = output + res_skip_acts - - return self.end(output) - - -class WaveGlow(torch.nn.Module): - def __init__(self, n_mel_channels, n_flows, n_group, n_early_every, - n_early_size, WN_config): - super(WaveGlow, self).__init__() - - self.upsample = torch.nn.ConvTranspose1d(n_mel_channels, - n_mel_channels, - 1024, stride=256) - assert(n_group % 2 == 0) - self.n_flows = n_flows - self.n_group = n_group - self.n_early_every = n_early_every - self.n_early_size = n_early_size - self.WN = torch.nn.ModuleList() - self.convinv = torch.nn.ModuleList() - - n_half = int(n_group/2) - - # Set up layers with the right sizes based on how many dimensions - # have been output already - n_remaining_channels = n_group - for k in range(n_flows): - if k % self.n_early_every == 0 and k > 0: - n_half = n_half - int(self.n_early_size/2) - n_remaining_channels = n_remaining_channels - self.n_early_size - self.convinv.append(Invertible1x1Conv(n_remaining_channels)) - self.WN.append(WN(n_half, n_mel_channels*n_group, **WN_config)) - self.n_remaining_channels = n_remaining_channels # Useful during inference - - def forward(self, forward_input): - """ - forward_input[0] = mel_spectrogram: batch x n_mel_channels x frames - forward_input[1] = audio: batch x time - """ - spect, audio = forward_input - - # Upsample spectrogram to size of audio - spect = self.upsample(spect) - assert(spect.size(2) >= audio.size(1)) - if spect.size(2) > audio.size(1): - spect = spect[:, :, :audio.size(1)] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - audio = audio.unfold(1, self.n_group, self.n_group).permute(0, 2, 1) - output_audio = [] - log_s_list = [] - log_det_W_list = [] - - for k in range(self.n_flows): - if k % self.n_early_every == 0 and k > 0: - output_audio.append(audio[:,:self.n_early_size,:]) - audio = audio[:,self.n_early_size:,:] - - audio, log_det_W = 
self.convinv[k](audio) - log_det_W_list.append(log_det_W) - - n_half = int(audio.size(1)/2) - audio_0 = audio[:,:n_half,:] - audio_1 = audio[:,n_half:,:] - - output = self.WN[k]((audio_0, spect)) - log_s = output[:, n_half:, :] - b = output[:, :n_half, :] - audio_1 = torch.exp(log_s)*audio_1 + b - log_s_list.append(log_s) - - audio = torch.cat([audio_0, audio_1],1) - - output_audio.append(audio) - return torch.cat(output_audio,1), log_s_list, log_det_W_list - - def infer(self, spect, sigma=1.0): - spect = self.upsample(spect) - # trim conv artifacts. maybe pad spec to kernel multiple - time_cutoff = self.upsample.kernel_size[0] - self.upsample.stride[0] - spect = spect[:, :, :-time_cutoff] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - if spect.type() == 'torch.cuda.HalfTensor': - audio = torch.cuda.HalfTensor(spect.size(0), - self.n_remaining_channels, - spect.size(2)).normal_() - else: - audio = torch.cuda.FloatTensor(spect.size(0), - self.n_remaining_channels, - spect.size(2)).normal_() - - audio = torch.autograd.Variable(sigma*audio) - - for k in reversed(range(self.n_flows)): - n_half = int(audio.size(1)/2) - audio_0 = audio[:,:n_half,:] - audio_1 = audio[:,n_half:,:] - - output = self.WN[k]((audio_0, spect)) - - s = output[:, n_half:, :] - b = output[:, :n_half, :] - audio_1 = (audio_1 - b)/torch.exp(s) - audio = torch.cat([audio_0, audio_1],1) - - audio = self.convinv[k](audio, reverse=True) - - if k % self.n_early_every == 0 and k > 0: - if spect.type() == 'torch.cuda.HalfTensor': - z = torch.cuda.HalfTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_() - else: - z = torch.cuda.FloatTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_() - audio = torch.cat((sigma*z, audio),1) - - audio = audio.permute(0,2,1).contiguous().view(audio.size(0), -1).data - return audio - - @staticmethod - def remove_weightnorm(model): - waveglow = model - for WN in waveglow.WN: - WN.start = torch.nn.utils.remove_weight_norm(WN.start) - WN.in_layers = remove(WN.in_layers) - WN.cond_layer = torch.nn.utils.remove_weight_norm(WN.cond_layer) - WN.res_skip_layers = remove(WN.res_skip_layers) - return waveglow - - -def remove(conv_list): - new_conv_list = torch.nn.ModuleList() - for old_conv in conv_list: - old_conv = torch.nn.utils.remove_weight_norm(old_conv) - new_conv_list.append(old_conv) - return new_conv_list diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/test_activation_checkpointing.py b/spaces/mshukor/UnIVAL/fairseq/tests/test_activation_checkpointing.py deleted file mode 100644 index 647a9572886f8aff09a4aadc0b21e1d5817ff38e..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/tests/test_activation_checkpointing.py +++ /dev/null @@ -1,79 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
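# ---------------------------------------------------------------------------
# Illustrative usage sketch (an addition, not part of the deleted sources):
# it shows how the WaveGlow model above is typically driven. forward() yields
# the flow terms used in the training loss; remove_weightnorm() followed by
# infer() performs synthesis. The WN_config values and tensor shapes below are
# assumptions chosen only to satisfy the constructor, and infer() as written
# needs a CUDA device because it allocates torch.cuda.*Tensor noise internally.
import torch

config = dict(n_mel_channels=80, n_flows=12, n_group=8,
              n_early_every=4, n_early_size=2,
              WN_config=dict(n_layers=8, n_channels=256, kernel_size=3))
model = WaveGlow(**config).cuda()

mel = torch.randn(2, 80, 63, device='cuda')      # batch x n_mel_channels x frames
audio = torch.randn(2, 63 * 256, device='cuda')  # batch x time; 256 = upsample stride

z, log_s_list, log_det_W_list = model((mel, audio))  # training-time flow pass

with torch.no_grad():
    synthesized = WaveGlow.remove_weightnorm(model).infer(mel, sigma=0.6)
# ---------------------------------------------------------------------------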
- -import unittest - -import torch -import torch.nn as nn -from fairseq.modules.checkpoint_activations import checkpoint_wrapper -from torch.utils.checkpoint import checkpoint - - -class Model(nn.Module): - def __init__( - self, use_pytorch_checkpoint=False, use_fairseq_checkpoint=False, **kwargs - ): - super().__init__() - torch.manual_seed(0) - self.use_pytorch_checkpoint = use_pytorch_checkpoint - self.ffn = nn.Sequential( - nn.Linear(32, 128), - # add a Dropout layer to test RNG save/restore - nn.Dropout(p=0.5), - nn.Linear(128, 32), - ) - if use_fairseq_checkpoint: - self.ffn = checkpoint_wrapper(self.ffn, **kwargs) - self.out = nn.Linear(32, 1) - - def forward(self, x): - if self.use_pytorch_checkpoint: - x = checkpoint(self.ffn, x) - else: - x = self.ffn(x) - return self.out(x) - - -class TestActivationCheckpointing(unittest.TestCase): - def _test_checkpoint_wrapper(self, device, log_memory_usage=False): - def get_loss_and_gnorm(model): - torch.manual_seed(1) - input = torch.rand(2, 16, 32).requires_grad_(True).to(device) - model.zero_grad() - loss = model(input).sum() - loss.backward() - gnorm = torch.norm( - torch.stack([torch.norm(p.grad.detach()) for p in model.parameters()]) - ) - return {"loss": loss, "gnorm": gnorm} - - model = Model().to(device) - no_cpt = get_loss_and_gnorm(model) - - model = Model(use_pytorch_checkpoint=True).to(device) - pyt_cpt = get_loss_and_gnorm(model) - torch.testing.assert_allclose(no_cpt["loss"], pyt_cpt["loss"]) - torch.testing.assert_allclose(no_cpt["gnorm"], pyt_cpt["gnorm"]) - - model = Model(use_fairseq_checkpoint=True).to(device) - fairseq_cpt = get_loss_and_gnorm(model) - torch.testing.assert_allclose(no_cpt["loss"], fairseq_cpt["loss"]) - torch.testing.assert_allclose(no_cpt["gnorm"], fairseq_cpt["gnorm"]) - - model = Model(use_fairseq_checkpoint=True, offload_to_cpu=True).to(device) - fairseq_cpt_offload = get_loss_and_gnorm(model) - torch.testing.assert_allclose(no_cpt["loss"], fairseq_cpt_offload["loss"]) - torch.testing.assert_allclose(no_cpt["gnorm"], fairseq_cpt_offload["gnorm"]) - - def test_checkpoint_wrapper_cpu(self): - self._test_checkpoint_wrapper(device=torch.device("cpu")) - - @unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") - def test_checkpoint_wrapper_cuda(self): - self._test_checkpoint_wrapper(device=torch.device("cuda")) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/test_export.py b/spaces/mshukor/UnIVAL/fairseq/tests/test_export.py deleted file mode 100644 index b380697b9aff8799f90c1e0819e408826ecf2932..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/tests/test_export.py +++ /dev/null @@ -1,121 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
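# ---------------------------------------------------------------------------
# Minimal sketch (an addition) of the property the activation-checkpointing
# test above verifies: checkpointed activations are recomputed during the
# backward pass, and the RNG state is saved and restored so stochastic layers
# such as Dropout replay identically. Seeds and shapes here are arbitrary.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

ffn = nn.Sequential(nn.Linear(32, 128), nn.Dropout(p=0.5), nn.Linear(128, 32))
x = torch.rand(2, 16, 32, requires_grad=True)

torch.manual_seed(0)
ref = ffn(x).sum()           # plain forward

torch.manual_seed(0)
cpt = checkpoint(ffn, x).sum()  # recomputes ffn in backward with restored RNG

assert torch.allclose(ref, cpt)  # identical forward values despite Dropout
(ref + cpt).backward()           # gradients flow through both paths
# ---------------------------------------------------------------------------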
- -import argparse -import tempfile -import unittest - -import torch -from fairseq.data.dictionary import Dictionary -from fairseq.models.transformer import TransformerModel -from fairseq.modules import multihead_attention, sinusoidal_positional_embedding -from fairseq.tasks.fairseq_task import LegacyFairseqTask - - -DEFAULT_TEST_VOCAB_SIZE = 100 - - -class DummyTask(LegacyFairseqTask): - def __init__(self, args): - super().__init__(args) - self.dictionary = get_dummy_dictionary() - if getattr(self.args, "ctc", False): - self.dictionary.add_symbol("") - self.src_dict = self.dictionary - self.tgt_dict = self.dictionary - - @property - def source_dictionary(self): - return self.src_dict - - @property - def target_dictionary(self): - return self.dictionary - - -def get_dummy_dictionary(vocab_size=DEFAULT_TEST_VOCAB_SIZE): - dummy_dict = Dictionary() - # add dummy symbol to satisfy vocab size - for id, _ in enumerate(range(vocab_size)): - dummy_dict.add_symbol("{}".format(id), 1000) - return dummy_dict - - -def get_dummy_task_and_parser(): - """ - Return a dummy task and argument parser, which can be used to - create a model/criterion. - """ - parser = argparse.ArgumentParser( - description="test_dummy_s2s_task", argument_default=argparse.SUPPRESS - ) - DummyTask.add_args(parser) - args = parser.parse_args([]) - task = DummyTask.setup_task(args) - return task, parser - - -def _test_save_and_load(scripted_module): - with tempfile.NamedTemporaryFile() as f: - scripted_module.save(f.name) - torch.jit.load(f.name) - - -class TestExportModels(unittest.TestCase): - def test_export_multihead_attention(self): - module = multihead_attention.MultiheadAttention(embed_dim=8, num_heads=2) - scripted = torch.jit.script(module) - _test_save_and_load(scripted) - - def test_incremental_state_multihead_attention(self): - module1 = multihead_attention.MultiheadAttention(embed_dim=8, num_heads=2) - module1 = torch.jit.script(module1) - module2 = multihead_attention.MultiheadAttention(embed_dim=8, num_heads=2) - module2 = torch.jit.script(module2) - - state = {} - state = module1.set_incremental_state(state, "key", {"a": torch.tensor([1])}) - state = module2.set_incremental_state(state, "key", {"a": torch.tensor([2])}) - v1 = module1.get_incremental_state(state, "key")["a"] - v2 = module2.get_incremental_state(state, "key")["a"] - - self.assertEqual(v1, 1) - self.assertEqual(v2, 2) - - def test_positional_embedding(self): - module = sinusoidal_positional_embedding.SinusoidalPositionalEmbedding( - embedding_dim=8, padding_idx=1 - ) - scripted = torch.jit.script(module) - _test_save_and_load(scripted) - - @unittest.skipIf( - torch.__version__ < "1.6.0", "Targeting OSS scriptability for the 1.6 release" - ) - def test_export_transformer(self): - task, parser = get_dummy_task_and_parser() - TransformerModel.add_args(parser) - args = parser.parse_args([]) - model = TransformerModel.build_model(args, task) - scripted = torch.jit.script(model) - _test_save_and_load(scripted) - - @unittest.skipIf( - torch.__version__ < "1.6.0", "Targeting OSS scriptability for the 1.6 release" - ) - def test_export_transformer_no_token_pos_emb(self): - task, parser = get_dummy_task_and_parser() - TransformerModel.add_args(parser) - args = parser.parse_args([]) - args.no_token_positional_embeddings = True - model = TransformerModel.build_model(args, task) - scripted = torch.jit.script(model) - _test_save_and_load(scripted) - - - -if __name__ == "__main__": - unittest.main() diff --git 
a/spaces/mthsk/sovits-100orangejuice/hubert/hubert_model_onnx.py b/spaces/mthsk/sovits-100orangejuice/hubert/hubert_model_onnx.py deleted file mode 100644 index d18f3c2a0fc29592a573a9780308d38f059640b9..0000000000000000000000000000000000000000 --- a/spaces/mthsk/sovits-100orangejuice/hubert/hubert_model_onnx.py +++ /dev/null @@ -1,217 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - def forward(self, x): - return self.units(x) - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = 
nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. 
- Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/downloads.py b/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/downloads.py deleted file mode 100644 index d7b87cb2cadd22fcdfaafc7fd56fc29e14d9a538..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/downloads.py +++ /dev/null @@ -1,153 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Download utils -""" - -import os -import platform -import subprocess -import time -import urllib -from pathlib import Path -from zipfile import ZipFile - -import requests -import torch - - -def gsutil_getsize(url=''): - # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du - s = subprocess.check_output(f'gsutil du {url}', shell=True).decode('utf-8') - return eval(s.split(' ')[0]) if len(s) else 0 # bytes - - -def safe_download(file, url, url2=None, min_bytes=1E0, error_msg=''): - # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes - file = Path(file) - assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}" - try: # url1 - print(f'Downloading {url} to {file}...') - torch.hub.download_url_to_file(url, str(file)) - assert file.exists() and file.stat().st_size > min_bytes, assert_msg # check - except Exception as e: # url2 - file.unlink(missing_ok=True) # remove partial downloads - print(f'ERROR: {e}\nRe-attempting {url2 or url} to {file}...') - os.system(f"curl -L '{url2 or url}' -o '{file}' --retry 3 -C -") # curl download, retry and resume on fail - finally: - if not file.exists() or file.stat().st_size < min_bytes: # check - file.unlink(missing_ok=True) # remove partial downloads - print(f"ERROR: {assert_msg}\n{error_msg}") - print('') - - -def attempt_download(file, repo='ultralytics/yolov5'): # from utils.downloads import *; attempt_download() - # Attempt file download if does not exist - file = Path(str(file).strip().replace("'", '')) - - if not file.exists(): - # URL specified - name = Path(urllib.parse.unquote(str(file))).name # decode '%2F' to '/' etc. - if str(file).startswith(('http:/', 'https:/')): # download - url = str(file).replace(':/', '://') # Pathlib turns :// -> :/ - file = name.split('?')[0] # parse authentication https://url.com/file.txt?auth... - if Path(file).is_file(): - print(f'Found {url} locally at {file}') # file already exists - else: - safe_download(file=file, url=url, min_bytes=1E5) - return file - - # GitHub assets - file.parent.mkdir(parents=True, exist_ok=True) # make parent dir (if required) - try: - response = requests.get(f'https://api.github.com/repos/{repo}/releases/latest').json() # github api - assets = [x['name'] for x in response['assets']] # release assets, i.e. ['yolov5s.pt', 'yolov5m.pt', ...] - tag = response['tag_name'] # i.e. 
'v1.0' - except Exception: # fallback plan - assets = ['yolov5n.pt', 'yolov5s.pt', 'yolov5m.pt', 'yolov5l.pt', 'yolov5x.pt', - 'yolov5n6.pt', 'yolov5s6.pt', 'yolov5m6.pt', 'yolov5l6.pt', 'yolov5x6.pt'] - try: - tag = subprocess.check_output('git tag', shell=True, stderr=subprocess.STDOUT).decode().split()[-1] - except Exception: - tag = 'v6.0' # current release - - if name in assets: - safe_download(file, - url=f'https://github.com/{repo}/releases/download/{tag}/{name}', - # url2=f'https://storage.googleapis.com/{repo}/ckpt/{name}', # backup url (optional) - min_bytes=1E5, - error_msg=f'{file} missing, try downloading from https://github.com/{repo}/releases/') - - return str(file) - - -def gdrive_download(id='16TiPfZj7htmTyhntwcZyEEAejOUxuT6m', file='tmp.zip'): - # Downloads a file from Google Drive. from yolov5.utils.downloads import *; gdrive_download() - t = time.time() - file = Path(file) - cookie = Path('cookie') # gdrive cookie - print(f'Downloading https://drive.google.com/uc?export=download&id={id} as {file}... ', end='') - file.unlink(missing_ok=True) # remove existing file - cookie.unlink(missing_ok=True) # remove existing cookie - - # Attempt file download - out = "NUL" if platform.system() == "Windows" else "/dev/null" - os.system(f'curl -c ./cookie -s -L "drive.google.com/uc?export=download&id={id}" > {out}') - if os.path.exists('cookie'): # large file - s = f'curl -Lb ./cookie "drive.google.com/uc?export=download&confirm={get_token()}&id={id}" -o {file}' - else: # small file - s = f'curl -s -L -o {file} "drive.google.com/uc?export=download&id={id}"' - r = os.system(s) # execute, capture return - cookie.unlink(missing_ok=True) # remove existing cookie - - # Error check - if r != 0: - file.unlink(missing_ok=True) # remove partial - print('Download error ') # raise Exception('Download error') - return r - - # Unzip if archive - if file.suffix == '.zip': - print('unzipping... 
', end='') - ZipFile(file).extractall(path=file.parent) # unzip - file.unlink() # remove zip - - print(f'Done ({time.time() - t:.1f}s)') - return r - - -def get_token(cookie="./cookie"): - with open(cookie) as f: - for line in f: - if "download" in line: - return line.split()[-1] - return "" - -# Google utils: https://cloud.google.com/storage/docs/reference/libraries ---------------------------------------------- -# -# -# def upload_blob(bucket_name, source_file_name, destination_blob_name): -# # Uploads a file to a bucket -# # https://cloud.google.com/storage/docs/uploading-objects#storage-upload-object-python -# -# storage_client = storage.Client() -# bucket = storage_client.get_bucket(bucket_name) -# blob = bucket.blob(destination_blob_name) -# -# blob.upload_from_filename(source_file_name) -# -# print('File {} uploaded to {}.'.format( -# source_file_name, -# destination_blob_name)) -# -# -# def download_blob(bucket_name, source_blob_name, destination_file_name): -# # Uploads a blob from a bucket -# storage_client = storage.Client() -# bucket = storage_client.get_bucket(bucket_name) -# blob = bucket.blob(source_blob_name) -# -# blob.download_to_filename(destination_file_name) -# -# print('Blob {} downloaded to {}.'.format( -# source_blob_name, -# destination_file_name)) diff --git a/spaces/nateraw/lavila/lavila/models/models.py b/spaces/nateraw/lavila/lavila/models/models.py deleted file mode 100644 index a90aee9076888f1d38307b67ba615f0bf173bbeb..0000000000000000000000000000000000000000 --- a/spaces/nateraw/lavila/lavila/models/models.py +++ /dev/null @@ -1,1218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
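# ---------------------------------------------------------------------------
# Illustrative usage sketch (an addition) for the download helpers defined in
# downloads.py above. attempt_download() resolves a bare asset name against
# the repo's GitHub release and only fetches the file if it is missing;
# safe_download() is the lower-level helper with an optional mirror. The
# example.com URLs below are placeholders, not real endpoints.
weights = attempt_download('yolov5s.pt')  # no-op if yolov5s.pt already exists

safe_download(file='data.zip',
              url='https://example.com/data.zip',           # placeholder URL
              url2='https://mirror.example.com/data.zip',   # optional fallback
              min_bytes=1e5,
              error_msg='download failed; fetch the file manually')
# ---------------------------------------------------------------------------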
- -import numpy as np -import timm -import torch -import torch.nn as nn -import torch.nn.functional as F -from transformers import DistilBertModel, GPT2LMHeadModel - -import lavila.models.loss as loss -from lavila.models.gpt2_gated import GPT2LMHeadModel as GatedGPT2LMHeadModel -from lavila.models.gpt2_gated import augment_gpt2_config -from lavila.models.narrator import VCLM_HF -from lavila.models.openai_clip import load as load_openai_clip -from lavila.models.openai_model import QuickGELU, Transformer -from lavila.models.timesformer import SpaceTimeTransformer -from lavila.models.utils import remap_keys, rsetattr - - -class VideoClassifier(nn.Module): - def __init__(self, - vision_model: nn.Module, - dropout: float, - num_classes: int, - **kwargs, - ): - super().__init__() - self.visual = vision_model - self.dropout = nn.Dropout(dropout) - self.fc_cls = nn.Linear(vision_model.num_features, num_classes, bias=True) - - self.fc_cls.weight.data.normal_(mean=0.0, std=0.01) - self.fc_cls.bias.data.zero_() - - def forward(self, image, use_checkpoint=False): - image_embed = self.visual(image, use_checkpoint=use_checkpoint) - if isinstance(image_embed, list): - assert len(image_embed) == 1 - image_embed = image_embed[0] - logit = self.fc_cls(self.dropout(image_embed)) - return logit - - -class VideoClassifierMultiHead(nn.Module): - def __init__(self, - vision_model: nn.Module, - dropout: float, - num_classes_list: list, - **kwargs, - ): - super().__init__() - self.visual = vision_model - self.dropout = nn.Dropout(dropout) - self.fc_cls = nn.ModuleList( - [nn.Linear(vision_model.num_features, num_classes, bias=True) for num_classes in num_classes_list] - ) - - for m in self.fc_cls: - m.weight.data.normal_(mean=0.0, std=0.01) - m.bias.data.zero_() - - def forward(self, image, use_checkpoint=False): - image_embed = self.visual(image, use_checkpoint=use_checkpoint) - if isinstance(image_embed, list): - assert len(image_embed) == 1 - image_embed = image_embed[0] - logit_list = [m(self.dropout(image_embed)) for m in self.fc_cls] - return logit_list - - -class CLIP(nn.Module): - def __init__(self, - embed_dim: int, - # vision - vision_width: int, - vision_model: nn.Module, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int, - tempearture_init=0.07, - **kwargs, - ): - super().__init__() - - self.context_length = context_length - self.vision_width = vision_width - - self.visual = vision_model - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask(), - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = nn.LayerNorm(transformer_width) # used to be `models.transformer.LayerNorm`` - - self.image_projection = nn.Parameter(torch.empty(vision_width, embed_dim)) - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - print("=> initialize initial temperature with {}".format(tempearture_init)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / tempearture_init)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std 
= self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - nn.init.normal_(self.image_projection, std=self.vision_width ** -0.5) - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def encode_image(self, image, use_checkpoint=False, apply_project=True): - x = self.visual(image, use_checkpoint=use_checkpoint) - if isinstance(x, list): - assert len(x) == 1 - x = x[0] - if not apply_project: - return x - x = x @ self.image_projection - - return x - - def encode_text(self, text, use_checkpoint=False): - x = self.token_embedding(text) # [batch_size, n_ctx, d_model] - x = x + self.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x, use_checkpoint=use_checkpoint) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text, use_checkpoint=False, norm_embed=False): - image_embed = self.encode_image(image, use_checkpoint=use_checkpoint) - text_embed = self.encode_text(text, use_checkpoint=use_checkpoint) - - if norm_embed: - image_embed = F.normalize(image_embed, dim=-1) - text_embed = F.normalize(text_embed, dim=-1) - return {'image_embed': image_embed, - 'text_embed': text_embed, - 'logit_scale': self.logit_scale.exp()} - - -class CLIP_HF(nn.Module): - def __init__(self, - embed_dim: int, - # vision - vision_width: int, - vision_model: nn.Module, - # text - text_width: int, - text_model: nn.Module, - text_use_cls_token: bool, - text_is_regressive: bool, - tempearture_init=0.07, - **kwargs, - ): - super().__init__() - - self.vision_width = vision_width - self.visual = vision_model - self.text_width = text_width - self.textual = text_model - self.text_use_cls_token = text_use_cls_token - self.text_is_regressive = text_is_regressive - - if 'projection' not in kwargs: - self.projection = 'default' - else: - self.projection = kwargs['projection'] - if self.projection == 'default': - self.image_projection = nn.Parameter(torch.empty(vision_width, embed_dim)) - self.text_projection = nn.Parameter(torch.empty(text_width, embed_dim)) - elif self.projection == 'frozen_in_time': - self.image_projection = nn.Sequential( - nn.Linear(vision_width, embed_dim) - ) - self.text_projection = nn.Sequential( - nn.ReLU(), - nn.Linear(text_width, embed_dim) - ) - print("=> initialize initial temperature with {}".format(tempearture_init)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / tempearture_init)) - - self.initialize_parameters() - - def initialize_parameters(self): - if self.projection == 'default': - nn.init.normal_(self.image_projection, std=self.vision_width ** -0.5) - nn.init.normal_(self.text_projection, std=self.text_width ** -0.5) - else: - 
nn.init.normal_(self.image_projection[0].weight, std=self.vision_width ** -0.5) - nn.init.normal_(self.text_projection[1].weight, std=self.text_width ** -0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def encode_image(self, image, use_checkpoint=False, apply_project=True): - x = self.visual(image, use_checkpoint=use_checkpoint) - if isinstance(x, list): - assert len(x) == 1 - x = x[0] - if not apply_project: - return x - if self.projection == 'default': - x = x @ self.image_projection - else: - x = self.image_projection(x) - - return x - - def encode_text(self, text, attention_mask=None, use_checkpoint=False): - if use_checkpoint: - if isinstance(self.textual, DistilBertModel): - pass - # print("DistilBertModel does not support gradient checkpointing. Skipping even if use_checkpoint=True") - else: - self.textual.gradient_checkpointing_enable() - else: - self.textual.gradient_checkpointing_disable() - # text, attention_mask = text.squeeze(1), attention_mask.squeeze(1) - # ^ uncomment this only when doing local debugging (distributed=False) - x = self.textual(text, attention_mask=attention_mask) - - if self.text_is_regressive: - # gpt-style - x = x.last_hidden_state - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] - else: - # bert-style - if self.text_use_cls_token: - x = x.last_hidden_state - x = x[torch.arange(x.shape[0]), 0, :] - else: - x = x.pooler_output - if self.projection == 'default': - x = x @ self.text_projection - else: - x = self.text_projection(x) - - return x - - def forward(self, image, text, mask=None, use_checkpoint=False, norm_embed=False): - image_embed = self.encode_image(image, use_checkpoint=use_checkpoint) - text_embed = self.encode_text(text, attention_mask=mask, use_checkpoint=use_checkpoint) - - if norm_embed: - image_embed = F.normalize(image_embed, dim=-1) - text_embed = F.normalize(text_embed, dim=-1) - return {'image_embed': image_embed, - 'text_embed': text_embed, - 'logit_scale': self.logit_scale.exp()} - - -def get_loss(model, args, tokenizer=None): - if model.startswith('CLIP'): - return loss.CLIPLoss( - use_vissl=args.contrastive_use_vissl, - cache_labels=True, - rank=args.rank, - world_size=args.world_size, - ) - elif model.startswith('VCLM'): - return loss.CaptionLoss(tokenizer=tokenizer) - else: - raise NotImplementedError - - -def get_metric_names(model): - if model.startswith('CLIP'): - return ['loss', 'clip_loss', 'clip_acc'] - elif model.startswith('VCLM'): - return ['loss', 'caption_loss', 'caption_acc', 'ppl'] - else: - raise NotImplementedError - - -def CLIP_OPENAI_TIMESFORMER_BASE( - num_frames=4, timesformer_gated_xattn=False, drop_path_rate=0, timesformer_freeze_space=False, - temperature_init=0.07, project_embed_dim=256, **kwargs, -): - vision_model = SpaceTimeTransformer( - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - drop_path_rate=drop_path_rate, - ) - clip_model, _ = load_openai_clip('ViT-B/16', 'cpu') - print("=> Loading CLIP (ViT-B/16) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=12) - 
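# Added note: strict=False in the load below is deliberate. SpaceTimeTransformer
# extends the 2D CLIP ViT with temporal parameters (e.g. time embeddings and
# temporal attention blocks) that have no counterpart in the remapped CLIP
# checkpoint, so a strict load would fail; `res` records the missing and
# unexpected keys and is printed for inspection.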
res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - if timesformer_freeze_space: - print("=> Freeze the space part in TimeSformer") - freeze_list, unfreeze_list = [], [] - for n, p in vision_model.named_parameters(): - if n not in remapped_state_dict or n == 'cls_token': - p.requires_grad = True - unfreeze_list.append(n) - else: - p.requires_grad = False - freeze_list.append(n) - print("Freeze the pretrained parts in TimeSformer: {}".format(freeze_list)) - print(" Learn the rest parts in TimeSformer: {}".format(unfreeze_list)) - - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - model = CLIP( - embed_dim=project_embed_dim, - vision_width=768, - vision_model=vision_model, - context_length=77, - vocab_size=49408, - transformer_width=512, - transformer_heads=8, - transformer_layers=12, - tempearture_init=temperature_init, - **kwargs - ) - model.transformer.load_state_dict(clip_model.transformer.state_dict()) - model.token_embedding.load_state_dict(clip_model.token_embedding.state_dict()) - model.positional_embedding.data.copy_(clip_model.positional_embedding.data) - model.ln_final.load_state_dict(clip_model.ln_final.state_dict()) - if project_embed_dim == clip_model.text_projection.shape[1]: - print("=> Loading CLIP's text_projection, image_projection and logit_scale directly") - model.image_projection.data.copy_(clip_model.visual.proj.data) - model.text_projection.data.copy_(clip_model.text_projection.data) - model.logit_scale.data.copy_(clip_model.logit_scale.data) - return model - - -def CLIP_OPENAI_TIMESFORMER_LARGE( - num_frames=4, timesformer_gated_xattn=False, drop_path_rate=0, timesformer_freeze_space=False, - temperature_init=0.07, project_embed_dim=256, **kwargs, -): - vision_model = SpaceTimeTransformer( - img_size=224, patch_size=14, - embed_dim=1024, depth=24, num_heads=16, - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - drop_path_rate=drop_path_rate, - ) - clip_model, _ = load_openai_clip('ViT-L/14', 'cpu') - print("=> Loading CLIP (ViT-L/14) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=24) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - if timesformer_freeze_space: - print("=> Freeze the space part in TimeSformer") - freeze_list, unfreeze_list = [], [] - for n, p in vision_model.named_parameters(): - if n not in remapped_state_dict or n == 'cls_token': - p.requires_grad = True - unfreeze_list.append(n) - else: - p.requires_grad = False - freeze_list.append(n) - print("Freeze the pretrained parts in TimeSformer: {}".format(freeze_list)) - print(" Learn the rest parts in TimeSformer: {}".format(unfreeze_list)) - - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - model = CLIP( - embed_dim=project_embed_dim, - vision_width=1024, - vision_model=vision_model, - context_length=77, - vocab_size=49408, - transformer_width=768, - transformer_heads=12, - transformer_layers=12, - tempearture_init=temperature_init, - **kwargs - ) - model.transformer.load_state_dict(clip_model.transformer.state_dict()) - model.token_embedding.load_state_dict(clip_model.token_embedding.state_dict()) - model.positional_embedding.data.copy_(clip_model.positional_embedding.data) - model.ln_final.load_state_dict(clip_model.ln_final.state_dict()) - if 
project_embed_dim == clip_model.text_projection.shape[1]: - print("=> Loading CLIP's text_projection, image_projection and logit_scale directly") - model.image_projection.data.copy_(clip_model.visual.proj.data) - model.text_projection.data.copy_(clip_model.text_projection.data) - model.logit_scale.data.copy_(clip_model.logit_scale.data) - return model - - -def CLIP_OPENAI_TIMESFORMER_LARGE_336PX( - num_frames=4, timesformer_gated_xattn=False, drop_path_rate=0, timesformer_freeze_space=False, - temperature_init=0.07, project_embed_dim=256, **kwargs, -): - vision_model = SpaceTimeTransformer( - img_size=336, patch_size=14, - embed_dim=1024, depth=24, num_heads=16, - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - drop_path_rate=drop_path_rate, - ) - clip_model, _ = load_openai_clip('ViT-L/14@336px', 'cpu') - print("=> Loading CLIP (ViT-L/14@336px) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=24) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - if timesformer_freeze_space: - print("=> Freeze the space part in TimeSformer") - freeze_list, unfreeze_list = [], [] - for n, p in vision_model.named_parameters(): - if n not in remapped_state_dict or n == 'cls_token': - p.requires_grad = True - unfreeze_list.append(n) - else: - p.requires_grad = False - freeze_list.append(n) - print("Freeze the pretrained parts in TimeSformer: {}".format(freeze_list)) - print(" Learn the rest parts in TimeSformer: {}".format(unfreeze_list)) - - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - model = CLIP( - embed_dim=project_embed_dim, - vision_width=1024, - vision_model=vision_model, - context_length=77, - vocab_size=49408, - transformer_width=768, - transformer_heads=12, - transformer_layers=12, - tempearture_init=temperature_init, - **kwargs - ) - model.transformer.load_state_dict(clip_model.transformer.state_dict()) - model.token_embedding.load_state_dict(clip_model.token_embedding.state_dict()) - model.positional_embedding.data.copy_(clip_model.positional_embedding.data) - model.ln_final.load_state_dict(clip_model.ln_final.state_dict()) - if project_embed_dim == clip_model.text_projection.shape[1]: - print("=> Loading CLIP's text_projection, image_projection and logit_scale directly") - model.image_projection.data.copy_(clip_model.visual.proj.data) - model.text_projection.data.copy_(clip_model.text_projection.data) - model.logit_scale.data.copy_(clip_model.logit_scale.data) - return model - - -def CLIP_OPENAI_TIMESFORMER_BASE_DISTILBERT_BASE( - num_frames=4, timesformer_gated_xattn=False, drop_path_rate=0, timesformer_freeze_space=False, - temperature_init=0.07, project_embed_dim=256, **kwargs, -): - vision_model = SpaceTimeTransformer( - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - drop_path_rate=drop_path_rate, - ) - clip_model, _ = load_openai_clip('ViT-B/16', 'cpu') - print("=> Loading CLIP (ViT-B/16) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=12) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - if timesformer_freeze_space: - print("=> Freeze the space part in TimeSformer") - freeze_list, unfreeze_list = [], [] - for n, p in 
vision_model.named_parameters(): - if n not in remapped_state_dict or n == 'cls_token': - p.requires_grad = True - unfreeze_list.append(n) - else: - p.requires_grad = False - freeze_list.append(n) - print("Freeze the pretrained parts in TimeSformer: {}".format(freeze_list)) - print(" Learn the rest parts in TimeSformer: {}".format(unfreeze_list)) - - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - text_model = DistilBertModel.from_pretrained( - 'distilbert-base-uncased', - ) - kwargs.pop('text_use_cls_token') # ignore args.use_cls_token since DistilBert does not have pooler on top - model = CLIP_HF( - embed_dim=project_embed_dim, - vision_width=vision_model.embed_dim, - vision_model=vision_model, - text_width=768, - text_model=text_model, - text_use_cls_token=True, # DistilBert does not have pooler on top - text_is_regressive=False, - tempearture_init=temperature_init, - **kwargs, - ) - - return model - - -def CLIP_OPENAI_TIMESFORMER_LARGE_DISTILBERT_BASE( - num_frames=4, timesformer_gated_xattn=False, drop_path_rate=0, timesformer_freeze_space=False, - temperature_init=0.07, project_embed_dim=256, **kwargs, -): - vision_model = SpaceTimeTransformer( - img_size=224, patch_size=14, - embed_dim=1024, depth=24, num_heads=16, - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - drop_path_rate=drop_path_rate, - ) - clip_model, _ = load_openai_clip('ViT-L/14', 'cpu') - print("=> Loading CLIP (ViT-L/14) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=24) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - if timesformer_freeze_space: - print("=> Freeze the space part in TimeSformer") - freeze_list, unfreeze_list = [], [] - for n, p in vision_model.named_parameters(): - if n not in remapped_state_dict or n == 'cls_token': - p.requires_grad = True - unfreeze_list.append(n) - else: - p.requires_grad = False - freeze_list.append(n) - print("Freeze the pretrained parts in TimeSformer: {}".format(freeze_list)) - print(" Learn the rest parts in TimeSformer: {}".format(unfreeze_list)) - - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - text_model = DistilBertModel.from_pretrained( - 'distilbert-base-uncased', - ) - kwargs.pop('text_use_cls_token') # ignore args.use_cls_token since DistilBert does not have pooler on top - model = CLIP_HF( - embed_dim=project_embed_dim, - vision_width=vision_model.embed_dim, - vision_model=vision_model, - text_width=768, - text_model=text_model, - text_use_cls_token=True, # DistilBert does not have pooler on top - text_is_regressive=False, - tempearture_init=temperature_init, - **kwargs, - ) - - return model - - -def CLIP_OPENAI_TIMESFORMER_LARGE_336PX_DISTILBERT_BASE( - num_frames=4, timesformer_gated_xattn=False, drop_path_rate=0, timesformer_freeze_space=False, - temperature_init=0.07, project_embed_dim=256, **kwargs, -): - vision_model = SpaceTimeTransformer( - img_size=336, patch_size=14, - embed_dim=1024, depth=24, num_heads=16, - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - drop_path_rate=drop_path_rate, - ) - clip_model, _ = load_openai_clip('ViT-L/14@336px', 'cpu') - print("=> Loading CLIP (ViT-L/14@336px) weights") - 
remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=24) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - if timesformer_freeze_space: - print("=> Freeze the space part in TimeSformer") - freeze_list, unfreeze_list = [], [] - for n, p in vision_model.named_parameters(): - if n not in remapped_state_dict or n == 'cls_token': - p.requires_grad = True - unfreeze_list.append(n) - else: - p.requires_grad = False - freeze_list.append(n) - print("Freeze the pretrained parts in TimeSformer: {}".format(freeze_list)) - print(" Learn the rest parts in TimeSformer: {}".format(unfreeze_list)) - - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - text_model = DistilBertModel.from_pretrained( - 'distilbert-base-uncased', - ) - kwargs.pop('text_use_cls_token') # ignore args.use_cls_token since DistilBert does not have pooler on top - model = CLIP_HF( - embed_dim=project_embed_dim, - vision_width=vision_model.embed_dim, - vision_model=vision_model, - text_width=768, - text_model=text_model, - text_use_cls_token=True, # DistilBert does not have pooler on top - text_is_regressive=False, - tempearture_init=temperature_init, - **kwargs, - ) - - return model - - -def CLIP_HF_EGOVLP_DISTILBERT_BASE(num_frames=4, project_embed_dim=256, **kwargs): - vision_model = SpaceTimeTransformer( - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ) - vit_model = timm.models.vision_transformer.vit_base_patch16_224(pretrained=True) - vision_model.load_state_dict(vit_model.state_dict(), strict=False) - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - text_model = DistilBertModel.from_pretrained( - 'distilbert-base-uncased', - ) - kwargs.pop('text_use_cls_token') # ignore args.use_cls_token since DistilBert does not have pooler on top - kwargs.update({'projection': 'frozen_in_time'}) - model = CLIP_HF( - embed_dim=project_embed_dim, - vision_width=vision_model.embed_dim, - vision_model=vision_model, - text_width=768, - text_model=text_model, - text_use_cls_token=True, # DistilBert does not have pooler on top - text_is_regressive=False, - **kwargs, - ) - - return model - - -def CLIP_HF_TIMESFORMER_DISTILBERT_BASE(num_frames=4, drop_path_rate=0, temperature_init=0.07, project_embed_dim=256, **kwargs): - vision_model = SpaceTimeTransformer( - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - drop_path_rate=drop_path_rate, - ) - vit_model = timm.models.vision_transformer.vit_base_patch16_224(pretrained=True) - vision_model.load_state_dict(vit_model.state_dict(), strict=False) - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - text_model = DistilBertModel.from_pretrained( - 'distilbert-base-uncased', - ) - kwargs.pop('text_use_cls_token') # ignore args.use_cls_token since DistilBert does not have pooler on top - model = CLIP_HF( - embed_dim=project_embed_dim, - vision_width=vision_model.embed_dim, - vision_model=vision_model, - text_width=768, - text_model=text_model, - text_use_cls_token=True, # DistilBert does not have pooler on top - text_is_regressive=False, - tempearture_init=temperature_init, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_VITB16_GPT2_LARGE(gated_xattn=False, freeze_lm_vclm=False, - freeze_visual_vclm=False, freeze_visual_vclm_temporal=False, **kwargs): - clip_model, _ = 
load_openai_clip('ViT-B/16', 'cpu') - vision_model = clip_model.visual - kwargs.pop('text_use_cls_token') - - gpt2 = GPT2LMHeadModel.from_pretrained( - "gpt2-large", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=2, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=768, - vision_model=vision_model, - text_width=1280, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=20, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_VITB16_GPT2_XL(gated_xattn=False, freeze_lm_vclm=False, - freeze_visual_vclm=False, freeze_visual_vclm_temporal=False, **kwargs): - clip_model, _ = load_openai_clip('ViT-B/16', 'cpu') - vision_model = clip_model.visual - kwargs.pop('text_use_cls_token') - - gpt2 = GPT2LMHeadModel.from_pretrained( - "gpt2-xl", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=2, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=768, - vision_model=vision_model, - text_width=1600, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=25, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_VITL14_GPT2_XL(gated_xattn=False, freeze_lm_vclm=False, - freeze_visual_vclm=False, freeze_visual_vclm_temporal=False, **kwargs): - clip_model, _ = load_openai_clip('ViT-L/14', 'cpu') - vision_model = clip_model.visual - kwargs.pop('text_use_cls_token') - - gpt2 = GPT2LMHeadModel.from_pretrained( - "gpt2-xl", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=2, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=1024, - vision_model=vision_model, - text_width=1600, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=25, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_VITL14_336PX_GPT2_XL(gated_xattn=False, freeze_lm_vclm=False, - freeze_visual_vclm=False, freeze_visual_vclm_temporal=False, **kwargs): - clip_model, _ = load_openai_clip('ViT-L/14@336px', 'cpu') - vision_model = 
clip_model.visual - kwargs.pop('text_use_cls_token') - - gpt2 = GPT2LMHeadModel.from_pretrained( - "gpt2-xl", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=2, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=1024, - vision_model=vision_model, - text_width=1600, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=25, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_TIMESFORMER_BASE_GPT2( - gated_xattn=False, - random_init_gpt2=False, - freeze_lm_vclm=False, - freeze_visual_vclm=False, - freeze_visual_vclm_temporal=False, - num_frames=4, - timesformer_gated_xattn=False, - **kwargs, -): - vision_model = SpaceTimeTransformer( - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - ) - clip_model, _ = load_openai_clip('ViT-B/16', 'cpu') - print("=> Loading CLIP (ViT-B/16) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=12) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - gpt2 = GPT2LMHeadModel.from_pretrained( - "gpt2", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=1, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - if not random_init_gpt2: - print('Loading LM from pretrained weights..') - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=768, - vision_model=vision_model, - text_width=768, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=12, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_TIMESFORMER_BASE_GPT2_XL( - gated_xattn=False, - freeze_lm_vclm=False, - freeze_visual_vclm=False, - freeze_visual_vclm_temporal=False, - num_frames=4, - timesformer_gated_xattn=False, - **kwargs, -): - vision_model = SpaceTimeTransformer( - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - ) - clip_model, _ = load_openai_clip('ViT-B/16', 'cpu') - print("=> Loading CLIP (ViT-B/16) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=12) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - gpt2 = 
GPT2LMHeadModel.from_pretrained( - "gpt2-xl", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=2, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=768, - vision_model=vision_model, - text_width=1600, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=25, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_TIMESFORMER_LARGE_GPT2_XL( - gated_xattn=False, - freeze_lm_vclm=False, - freeze_visual_vclm=False, - freeze_visual_vclm_temporal=False, - num_frames=4, - timesformer_gated_xattn=False, - **kwargs, -): - vision_model = SpaceTimeTransformer( - img_size=224, patch_size=14, - embed_dim=1024, depth=24, num_heads=16, - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - ) - clip_model, _ = load_openai_clip('ViT-L/14', 'cpu') - print("=> Loading CLIP (ViT-L/14x) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=24) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - gpt2 = GPT2LMHeadModel.from_pretrained( - "gpt2-xl", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=2, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=1024, - vision_model=vision_model, - text_width=1600, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=25, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_TIMESFORMER_LARGE_GPT2( - gated_xattn=False, - freeze_lm_vclm=False, - freeze_visual_vclm=False, - freeze_visual_vclm_temporal=False, - num_frames=4, - timesformer_gated_xattn=False, - **kwargs -): - vision_model = SpaceTimeTransformer( - img_size=224, patch_size=14, - embed_dim=1024, depth=24, num_heads=16, - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - ) - clip_model, _ = load_openai_clip('ViT-L/14', 'cpu') - print("=> Loading CLIP (ViT-L/14x) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=24) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - gpt2 = 
GPT2LMHeadModel.from_pretrained( - "gpt2", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=1, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=1024, - vision_model=vision_model, - text_width=768, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=12, - **kwargs, - ) - - return model - - -def VCLM_OPENAI_TIMESFORMER_LARGE_336PX_GPT2_XL( - gated_xattn=False, - freeze_lm_vclm=False, - freeze_visual_vclm=False, - freeze_visual_vclm_temporal=False, - num_frames=4, - timesformer_gated_xattn=False, - **kwargs, -): - vision_model = SpaceTimeTransformer( - img_size=336, patch_size=14, - embed_dim=1024, depth=24, num_heads=16, - num_frames=num_frames, - time_init='zeros', - attention_style='frozen-in-time', - ln_pre=True, - act_layer=QuickGELU, - is_tanh_gating=timesformer_gated_xattn, - ) - clip_model, _ = load_openai_clip('ViT-L/14@336px', 'cpu') - print("=> Loading CLIP (ViT-L/14@336px) weights") - remapped_state_dict = remap_keys(clip_model.visual.state_dict(), transformer_layers=24) - res = vision_model.load_state_dict(remapped_state_dict, strict=False) - print(res) - vision_model.head = nn.Identity() - vision_model.pre_logits = nn.Identity() - vision_model.fc = nn.Identity() - - gpt2 = GPT2LMHeadModel.from_pretrained( - "gpt2-xl", - use_cache=False, - ) - new_config = augment_gpt2_config(gpt2.config, cross_attn_freq=3, gated_xattn=gated_xattn) - text_decoder = GatedGPT2LMHeadModel(new_config) - for n, p in gpt2.named_parameters(): - rsetattr(text_decoder, n + '.data', p.data) - - if freeze_lm_vclm: - print('Freeze the LM part of TextDecoder of VCLM') - text_decoder.freeze_lm_weights() - - if freeze_visual_vclm: - print('Freeze the spatial part of VideoEncoder of VCLM') - vision_model.freeze_spatial_weights() - - if freeze_visual_vclm_temporal: - print('Freeze the temporal part of VideoEncoder of VCLM') - vision_model.freeze_temporal_weights() - - model = VCLM_HF( - vision_width=1024, - vision_model=vision_model, - text_width=1600, - text_decoder=text_decoder, - num_img_queries=256, - dim_head=64, - heads=25, - **kwargs, - ) - - return model - - -def CLIP_OPENAI_VITB32(**kwargs): - model, _ = load_openai_clip('ViT-B/32', 'cpu') - return model - - -def CLIP_OPENAI_VITB16(**kwargs): - model, _ = load_openai_clip('ViT-B/16', 'cpu') - return model - - -def CLIP_OPENAI_VITL14(**kwargs): - model, _ = load_openai_clip('ViT-L/14', 'cpu') - return model - - -def CLIP_OPENAI_VITL14_336PX(**kwargs): - model, _ = load_openai_clip('ViT-L/14@336px', 'cpu') - return model diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cdp Bt Serial Number 3555.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cdp Bt Serial Number 3555.md deleted file mode 100644 index 23969910df730023d4c5153e5c960cfbe2aa5c80..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Cdp Bt Serial Number 3555.md +++ /dev/null @@ -1,15 +0,0 @@ - -

How to Update an Old Autocom CDP+ Interface with Serial Number 3555

-

      If you have an old Autocom CDP+ interface with serial number 3555 and you want to use it with the latest software version (2017 r1 or r3), you may run into problems: the interface may not be recognized by the software, or it may fail the hardware test. In this article, we will show you how to solve these issues and make your interface work with the new software.
      

-

Cdp Bt Serial Number 3555


Download Zip »»» https://urlcod.com/2uI9DT



-

Step 1: Change the Serial Number and Hardware Key

-

      The first thing you need to do is change the serial number and hardware key stored in your interface, because the old serial number (3555) is not compatible with the new software. Use a serial number that matches the software version you want to run; for example, the 2017 r3 software expects serial number 30250 and hardware key MNZTTOOCNHVE. You can change these values with a tool that can read and write the EEPROM of your interface. You can find such a tool online or ask someone who has it to help you.
      

-

Step 2: Patch the Software Files

-

      The next step is to patch the software files that are installed on your computer. This is necessary to bypass the activation process and make the software recognize your interface. You can find the patch files online or ask someone who has them to share them with you. Copy the patched files into the installation folder of your software, replacing the originals. Make sure you back up the original files before doing this.
      

-

Step 3: Generate a FileActivation.xml File

-

      The final step is to generate a FileActivation.xml file that contains the activation information for your interface. You need this file to activate your software and make it work with your interface. You can generate this file with a tool that creates it from your serial number, hardware key, and ID. You can find this tool online or ask someone who has it to help you. Then copy the FileActivation.xml file into the installation folder of your software.
      

-

-

Conclusion

-

      By following these steps, you should be able to update your old Autocom CDP+ interface with serial number 3555 and make it work with the latest software version (2017 r1 or r3). This way, you can enjoy the benefits of professional PC-based diagnostic equipment for heavy-duty vehicles, commercial transport, and cars. If you have any questions or problems, feel free to ask for help on online forums or websites that specialize in Autocom products.
      

      
-
-
\ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/News Aggregator Script Nulled Themes _BEST_.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/News Aggregator Script Nulled Themes _BEST_.md deleted file mode 100644 index d5cc6142d8733083f878b149be33b8cf4a3a2f38..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/News Aggregator Script Nulled Themes _BEST_.md +++ /dev/null @@ -1,42 +0,0 @@ -
-

How to Create a News Aggregator Website with Nulled Themes

-

A news aggregator website is a platform that collects and displays news articles from various sources on the web. It can be a great way to provide your visitors with fresh and relevant content, as well as generate traffic and revenue from ads or subscriptions.

-

However, creating a news aggregator website from scratch can be challenging and time-consuming. You need to find reliable news sources, set up a cron job to fetch the news regularly, design a user-friendly interface, and optimize your site for SEO and performance.
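      Before reaching for a theme at all, it helps to see how small the core fetching step really is. Below is a minimal, hypothetical sketch of that step in Python; it assumes the third-party feedparser package (pip install feedparser), and the feed URLs are placeholders rather than real sources:

      ```python
      # Minimal feed-fetching sketch for a news aggregator (hypothetical URLs).
      # Assumes the third-party "feedparser" package: pip install feedparser
      import feedparser

      FEED_URLS = [
          "https://example.com/rss.xml",    # placeholder news source
          "https://example.org/feed.atom",  # placeholder news source
      ]

      def fetch_latest(urls, per_feed_limit=5):
          """Collect the newest entries from each feed into a flat list."""
          items = []
          for url in urls:
              feed = feedparser.parse(url)
              for entry in feed.entries[:per_feed_limit]:
                  items.append({
                      "source": feed.feed.get("title", url),
                      "title": entry.get("title", ""),
                      "link": entry.get("link", ""),
                      "published": entry.get("published", ""),
                  })
          return items

      if __name__ == "__main__":
          for item in fetch_latest(FEED_URLS):
              print(f"[{item['source']}] {item['title']} - {item['link']}")
      ```

      A cron entry such as `*/30 * * * * python fetch_news.py` would then run this every 30 minutes; a production aggregator would also deduplicate entries and persist them to a database.
      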

-

News Aggregator Script Nulled Themes


DOWNLOAD 🗸🗸🗸 https://urlcod.com/2uI9Yj



-

Fortunately, there is a shortcut that can save you a lot of hassle and money: using nulled themes. Nulled themes are premium WordPress themes that have been cracked or hacked to remove the license verification. They are usually available for free or at a very low cost on various websites.

-

Using nulled themes can help you create a news aggregator website quickly and easily, without having to pay for expensive licenses or hire developers. However, there are also some risks and drawbacks that you should be aware of before using them.

-

The Benefits of Using Nulled Themes for News Aggregator Websites

-

Some of the benefits of using nulled themes for news aggregator websites are:

-
    -
  • You can access hundreds of premium themes that have been designed specifically for news aggregation, with features such as responsive layouts, multiple categories, widgets, sliders, social media integration, etc.
  • -
  • You can customize your site according to your preferences and needs, without having to worry about coding or compatibility issues.
  • -
  • You can save money on theme licenses and updates, as well as hosting and maintenance costs.
  • -
  • You can launch your site faster and start generating traffic and revenue sooner.
  • -
-

The Risks of Using Nulled Themes for News Aggregator Websites

-

Some of the risks of using nulled themes for news aggregator websites are:

-
    -
  • You may violate the intellectual property rights of the original theme developers and face legal consequences.
  • -
  • You may expose your site to malware, viruses, or hacking attacks that can compromise your security and privacy.
  • -
  • You may encounter bugs, errors, or compatibility issues that can affect your site's functionality and performance.
  • -
  • You may miss out on important updates, features, or support from the original theme developers.
  • -
  • You may harm your site's reputation and credibility by using pirated or low-quality themes.
  • -
-

How to Choose a Nulled Theme for Your News Aggregator Website

-

If you decide to use a nulled theme for your news aggregator website, you should be careful and selective in choosing one. Here are some tips to help you find a suitable nulled theme for your site:

-
    -
  • Do some research on the original theme developer and check their reputation, reviews, ratings, etc. Avoid themes that have been developed by unknown or shady sources.
  • -
  • Download the theme from a reputable and trusted website that offers virus-free and malware-free downloads. Avoid websites that have pop-ups, ads, or redirects that may contain malicious links or downloads.
  • -
      • Scan the theme files with antivirus or anti-malware software before installing them on your site. Delete any suspicious or unwanted files that may contain hidden code or backdoors.
      
  • -
  • Test the theme on a local or staging server before deploying it on your live site. Check for any errors, bugs, or compatibility issues that may affect your site's functionality or performance.
  • -
      • Back up your site regularly and keep a copy of the original theme files in case you need to restore them later; a checksum manifest (see the sketch after this list) makes it easy to spot files that change unexpectedly.
      
  • -
-
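      For the backup tip above, a simple way to keep a verifiable copy is to record a checksum for every theme file. This is a minimal sketch using only the Python standard library; the "my-theme" directory name is a placeholder:

      ```python
      # Minimal integrity-manifest sketch (standard library only).
      # Records a SHA-256 checksum for every theme file so later tampering
      # can be detected by re-running the script and diffing the manifests.
      import hashlib
      import json
      from pathlib import Path

      def build_manifest(theme_dir: str) -> dict:
          """Map each file path (relative to theme_dir) to its SHA-256 digest."""
          root = Path(theme_dir)
          manifest = {}
          for path in sorted(root.rglob("*")):
              if path.is_file():
                  digest = hashlib.sha256(path.read_bytes()).hexdigest()
                  manifest[str(path.relative_to(root))] = digest
          return manifest

      if __name__ == "__main__":
          # "my-theme" is a placeholder for your theme's directory.
          manifest = build_manifest("my-theme")
          Path("manifest.json").write_text(json.dumps(manifest, indent=2))
      ```

      Re-running the script later and diffing the two manifest.json files reveals exactly which files were modified, added, or removed.
      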

Some Examples of Nulled Themes for News Aggregator Websites

-

To give you some inspiration, here are some examples of nulled themes that you can use for your news aggregator website:

-

- -
    - -
      1. Newspilot - Automatic News Aggregator & Script: This is a PHP script that allows you to create an automated news aggregator website with RSS or ATOM feeds. You can create unlimited categories, sources, widgets, ads, and more, and you can import news manually or automatically with cron jobs. You can download it for free.
      
      
    - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/training/unconditional_training.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/training/unconditional_training.md deleted file mode 100644 index 7a588cc4cc63ab51e1154a10f2f1dfc9c539bd1f..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/training/unconditional_training.md +++ /dev/null @@ -1,146 +0,0 @@ - - -# Unconditional image generation - -Unconditional image generation is not conditioned on any text or images, unlike text- or image-to-image models. It only generates images that resemble its training data distribution. - - - - -This guide will show you how to train an unconditional image generation model on existing datasets as well as your own custom dataset. All the training scripts for unconditional image generation can be found [here](https://github.com/huggingface/diffusers/tree/main/examples/unconditional_image_generation) if you're interested in learning more about the training details. - -Before running the script, make sure you install the library's training dependencies: - -```bash -pip install diffusers[training] accelerate datasets -``` - -Next, initialize an 🤗 [Accelerate](https://github.com/huggingface/accelerate/) environment with: - -```bash -accelerate config -``` - -To setup a default 🤗 Accelerate environment without choosing any configurations: - -```bash -accelerate config default -``` - -Or if your environment doesn't support an interactive shell like a notebook, you can use: - -```bash -from accelerate.utils import write_basic_config - -write_basic_config() -``` - -## Upload model to Hub - -You can upload your model on the Hub by adding the following argument to the training script: - -```bash ---push_to_hub -``` - -## Save and load checkpoints - -It is a good idea to regularly save checkpoints in case anything happens during training. To save a checkpoint, pass the following argument to the training script: - -```bash ---checkpointing_steps=500 -``` - -The full training state is saved in a subfolder in the `output_dir` every 500 steps, which allows you to load a checkpoint and resume training if you pass the `--resume_from_checkpoint` argument to the training script: - -```bash ---resume_from_checkpoint="checkpoint-1500" -``` - -## Finetuning - -You're ready to launch the [training script](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py) now! Specify the dataset name to finetune on with the `--dataset_name` argument and then save it to the path in `--output_dir`. To use your own dataset, take a look at the [Create a dataset for training](create_dataset) guide. - -The training script creates and saves a `diffusion_pytorch_model.bin` file in your repository. - - - -💡 A full training run takes 2 hours on 4xV100 GPUs. - - - -For example, to finetune on the [Oxford Flowers](https://huggingface.co/datasets/huggan/flowers-102-categories) dataset: - -```bash -accelerate launch train_unconditional.py \ - --dataset_name="huggan/flowers-102-categories" \ - --resolution=64 \ - --output_dir="ddpm-ema-flowers-64" \ - --train_batch_size=16 \ - --num_epochs=100 \ - --gradient_accumulation_steps=1 \ - --learning_rate=1e-4 \ - --lr_warmup_steps=500 \ - --mixed_precision=no \ - --push_to_hub -``` - -
    - -
    - -Or if you want to train your model on the [Pokemon](https://huggingface.co/datasets/huggan/pokemon) dataset: - -```bash -accelerate launch train_unconditional.py \ - --dataset_name="huggan/pokemon" \ - --resolution=64 \ - --output_dir="ddpm-ema-pokemon-64" \ - --train_batch_size=16 \ - --num_epochs=100 \ - --gradient_accumulation_steps=1 \ - --learning_rate=1e-4 \ - --lr_warmup_steps=500 \ - --mixed_precision=no \ - --push_to_hub -``` - -
    - -
    - -### Training with multiple GPUs - -`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch) -for running distributed training with `accelerate`. Here is an example command: - -```bash -accelerate launch --mixed_precision="fp16" --multi_gpu train_unconditional.py \ - --dataset_name="huggan/pokemon" \ - --resolution=64 --center_crop --random_flip \ - --output_dir="ddpm-ema-pokemon-64" \ - --train_batch_size=16 \ - --num_epochs=100 \ - --gradient_accumulation_steps=1 \ - --use_ema \ - --learning_rate=1e-4 \ - --lr_warmup_steps=500 \ - --mixed_precision="fp16" \ - --logger="wandb" \ - --push_to_hub -``` \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/text_to_image/train_text_to_image_flax.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/text_to_image/train_text_to_image_flax.py deleted file mode 100644 index ac3afcbaba12aa404e3bcf544fbc3c7c9bba0d8b..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/examples/text_to_image/train_text_to_image_flax.py +++ /dev/null @@ -1,573 +0,0 @@ -import argparse -import logging -import math -import os -import random -from pathlib import Path - -import jax -import jax.numpy as jnp -import numpy as np -import optax -import torch -import torch.utils.checkpoint -import transformers -from datasets import load_dataset -from flax import jax_utils -from flax.training import train_state -from flax.training.common_utils import shard -from huggingface_hub import create_repo, upload_folder -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPImageProcessor, CLIPTokenizer, FlaxCLIPTextModel, set_seed - -from diffusers import ( - FlaxAutoencoderKL, - FlaxDDPMScheduler, - FlaxPNDMScheduler, - FlaxStableDiffusionPipeline, - FlaxUNet2DConditionModel, -) -from diffusers.pipelines.stable_diffusion import FlaxStableDiffusionSafetyChecker -from diffusers.utils import check_min_version - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.22.0.dev0") - -logger = logging.getLogger(__name__) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that 🤗 Datasets can understand." - ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. 
In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." - ), - ) - parser.add_argument( - "--image_column", type=str, default="image", help="The column of the dataset containing an image." - ) - parser.add_argument( - "--caption_column", - type=str, - default="text", - help="The column of the dataset containing a caption or a list of captions.", - ) - parser.add_argument( - "--max_train_samples", - type=int, - default=None, - help=( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="sd-model-finetuned", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument("--seed", type=int, default=0, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--random_flip", - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=None, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. 
Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - # Sanity checks - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("Need either a dataset name or a training folder.") - - return args - - -dataset_name_mapping = { - "lambdalabs/pokemon-blip-captions": ("image", "text"), -} - - -def get_params_to_save(params): - return jax.device_get(jax.tree_util.tree_map(lambda x: x[0], params)) - - -def main(): - args = parse_args() - - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - # Setup logging, we only want one process per machine to log things on the screen. - logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) - if jax.process_index() == 0: - transformers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if jax.process_index() == 0: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). 
- - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - ) - else: - data_files = {} - if args.train_data_dir is not None: - data_files["train"] = os.path.join(args.train_data_dir, "**") - dataset = load_dataset( - "imagefolder", - data_files=data_files, - cache_dir=args.cache_dir, - ) - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets. - # We need to tokenize inputs and targets. - column_names = dataset["train"].column_names - - # 6. Get the column names for input/target. - dataset_columns = dataset_name_mapping.get(args.dataset_name, None) - if args.image_column is None: - image_column = dataset_columns[0] if dataset_columns is not None else column_names[0] - else: - image_column = args.image_column - if image_column not in column_names: - raise ValueError( - f"--image_column' value '{args.image_column}' needs to be one of: {', '.join(column_names)}" - ) - if args.caption_column is None: - caption_column = dataset_columns[1] if dataset_columns is not None else column_names[1] - else: - caption_column = args.caption_column - if caption_column not in column_names: - raise ValueError( - f"--caption_column' value '{args.caption_column}' needs to be one of: {', '.join(column_names)}" - ) - - # Preprocessing the datasets. - # We need to tokenize input captions and transform the images. - def tokenize_captions(examples, is_train=True): - captions = [] - for caption in examples[caption_column]: - if isinstance(caption, str): - captions.append(caption) - elif isinstance(caption, (list, np.ndarray)): - # take a random caption if there are multiple - captions.append(random.choice(caption) if is_train else caption[0]) - else: - raise ValueError( - f"Caption column `{caption_column}` should contain either strings or lists of strings." 
- ) - inputs = tokenizer(captions, max_length=tokenizer.model_max_length, padding="do_not_pad", truncation=True) - input_ids = inputs.input_ids - return input_ids - - train_transforms = transforms.Compose( - [ - transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def preprocess_train(examples): - images = [image.convert("RGB") for image in examples[image_column]] - examples["pixel_values"] = [train_transforms(image) for image in images] - examples["input_ids"] = tokenize_captions(examples) - - return examples - - if args.max_train_samples is not None: - dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples)) - # Set the training transforms - train_dataset = dataset["train"].with_transform(preprocess_train) - - def collate_fn(examples): - pixel_values = torch.stack([example["pixel_values"] for example in examples]) - pixel_values = pixel_values.to(memory_format=torch.contiguous_format).float() - input_ids = [example["input_ids"] for example in examples] - - padded_tokens = tokenizer.pad( - {"input_ids": input_ids}, padding="max_length", max_length=tokenizer.model_max_length, return_tensors="pt" - ) - batch = { - "pixel_values": pixel_values, - "input_ids": padded_tokens.input_ids, - } - batch = {k: v.numpy() for k, v in batch.items()} - - return batch - - total_train_batch_size = args.train_batch_size * jax.local_device_count() - train_dataloader = torch.utils.data.DataLoader( - train_dataset, shuffle=True, collate_fn=collate_fn, batch_size=total_train_batch_size, drop_last=True - ) - - weight_dtype = jnp.float32 - if args.mixed_precision == "fp16": - weight_dtype = jnp.float16 - elif args.mixed_precision == "bf16": - weight_dtype = jnp.bfloat16 - - # Load models and create wrapper for stable diffusion - tokenizer = CLIPTokenizer.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, subfolder="tokenizer" - ) - text_encoder = FlaxCLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, subfolder="text_encoder", dtype=weight_dtype - ) - vae, vae_params = FlaxAutoencoderKL.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, subfolder="vae", dtype=weight_dtype - ) - unet, unet_params = FlaxUNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, revision=args.revision, subfolder="unet", dtype=weight_dtype - ) - - # Optimization - if args.scale_lr: - args.learning_rate = args.learning_rate * total_train_batch_size - - constant_scheduler = optax.constant_schedule(args.learning_rate) - - adamw = optax.adamw( - learning_rate=constant_scheduler, - b1=args.adam_beta1, - b2=args.adam_beta2, - eps=args.adam_epsilon, - weight_decay=args.adam_weight_decay, - ) - - optimizer = optax.chain( - optax.clip_by_global_norm(args.max_grad_norm), - adamw, - ) - - state = train_state.TrainState.create(apply_fn=unet.__call__, params=unet_params, tx=optimizer) - - noise_scheduler = FlaxDDPMScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000 - ) - noise_scheduler_state = noise_scheduler.create_state() - - # Initialize our training - rng = jax.random.PRNGKey(args.seed) - train_rngs = jax.random.split(rng, 
jax.local_device_count()) - - def train_step(state, text_encoder_params, vae_params, batch, train_rng): - dropout_rng, sample_rng, new_train_rng = jax.random.split(train_rng, 3) - - def compute_loss(params): - # Convert images to latent space - vae_outputs = vae.apply( - {"params": vae_params}, batch["pixel_values"], deterministic=True, method=vae.encode - ) - latents = vae_outputs.latent_dist.sample(sample_rng) - # (NHWC) -> (NCHW) - latents = jnp.transpose(latents, (0, 3, 1, 2)) - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise_rng, timestep_rng = jax.random.split(sample_rng) - noise = jax.random.normal(noise_rng, latents.shape) - # Sample a random timestep for each image - bsz = latents.shape[0] - timesteps = jax.random.randint( - timestep_rng, - (bsz,), - 0, - noise_scheduler.config.num_train_timesteps, - ) - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(noise_scheduler_state, latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder( - batch["input_ids"], - params=text_encoder_params, - train=False, - )[0] - - # Predict the noise residual and compute loss - model_pred = unet.apply( - {"params": params}, noisy_latents, timesteps, encoder_hidden_states, train=True - ).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == "epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(noise_scheduler_state, latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = (target - model_pred) ** 2 - loss = loss.mean() - - return loss - - grad_fn = jax.value_and_grad(compute_loss) - loss, grad = grad_fn(state.params) - grad = jax.lax.pmean(grad, "batch") - - new_state = state.apply_gradients(grads=grad) - - metrics = {"loss": loss} - metrics = jax.lax.pmean(metrics, axis_name="batch") - - return new_state, metrics, new_train_rng - - # Create parallel version of the train step - p_train_step = jax.pmap(train_step, "batch", donate_argnums=(0,)) - - # Replicate the train state on each device - state = jax_utils.replicate(state) - text_encoder_params = jax_utils.replicate(text_encoder.params) - vae_params = jax_utils.replicate(vae_params) - - # Train! - num_update_steps_per_epoch = math.ceil(len(train_dataloader)) - - # Scheduler and math around the number of training steps. - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel & distributed) = {total_train_batch_size}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - - global_step = 0 - - epochs = tqdm(range(args.num_train_epochs), desc="Epoch ... 
", position=0) - for epoch in epochs: - # ======================== Training ================================ - - train_metrics = [] - - steps_per_epoch = len(train_dataset) // total_train_batch_size - train_step_progress_bar = tqdm(total=steps_per_epoch, desc="Training...", position=1, leave=False) - # train - for batch in train_dataloader: - batch = shard(batch) - state, train_metric, train_rngs = p_train_step(state, text_encoder_params, vae_params, batch, train_rngs) - train_metrics.append(train_metric) - - train_step_progress_bar.update(1) - - global_step += 1 - if global_step >= args.max_train_steps: - break - - train_metric = jax_utils.unreplicate(train_metric) - - train_step_progress_bar.close() - epochs.write(f"Epoch... ({epoch + 1}/{args.num_train_epochs} | Loss: {train_metric['loss']})") - - # Create the pipeline using using the trained modules and save it. - if jax.process_index() == 0: - scheduler = FlaxPNDMScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True - ) - safety_checker = FlaxStableDiffusionSafetyChecker.from_pretrained( - "CompVis/stable-diffusion-safety-checker", from_pt=True - ) - pipeline = FlaxStableDiffusionPipeline( - text_encoder=text_encoder, - vae=vae, - unet=unet, - tokenizer=tokenizer, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32"), - ) - - pipeline.save_pretrained( - args.output_dir, - params={ - "text_encoder": get_params_to_save(text_encoder_params), - "vae": get_params_to_save(vae_params), - "unet": get_params_to_save(state.params), - "safety_checker": safety_checker.params, - }, - ) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/grad_cam.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/grad_cam.py deleted file mode 100644 index 5c4ca9eb9774e062115f829badda5b0c9718f508..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/utils/grad_cam.py +++ /dev/null @@ -1,183 +0,0 @@ -from typing import Callable, Iterable, Tuple -import torch -import numpy as np -import PIL.Image -import cv2 -import wandb - -from tqdm import tqdm -from pytorch_grad_cam import GradCAM - -from utils.val_loop_hook import ValidationLoopHook - -def _get_grad_cam_target(model): - """ - Determines the appropriate GradCAM target. 
- """ - - # very naive check - if hasattr(model, "features"): - return getattr(model, "features") - - pooling = [torch.nn.AdaptiveAvgPool1d, torch.nn.AvgPool1d, torch.nn.MaxPool1d, torch.nn.AdaptiveMaxPool1d, - torch.nn.AdaptiveAvgPool2d, torch.nn.AvgPool2d, torch.nn.MaxPool2d, torch.nn.AdaptiveMaxPool2d, - torch.nn.AdaptiveAvgPool3d, torch.nn.AvgPool3d, torch.nn.MaxPool3d, torch.nn.AdaptiveMaxPool3d] - convolutions = [torch.nn.Conv1d, torch.nn.Conv2d, torch.nn.Conv3d] - - # reverse search starting from the final module - inverted_modules = list(model.modules())[::-1] - for i, module in enumerate(inverted_modules): - if any([isinstance(module, po) for po in pooling]): - # if a pooling layer was hit, pick the module directly before it - return inverted_modules[i+1] - elif any([isinstance(module, co) for co in convolutions]): - # if a convolution was hit (but no pooling layer), pick that one instead - return module - elif isinstance(module, torch.nn.Sequential): - # if a sequential module is hit, explore it - for child in list(module.children())[::-1]: - sequential_result = _get_grad_cam_target(child) - if sequential_result is not None: - return sequential_result - -def _show_cam_on_image(img: np.ndarray, mask: np.ndarray, use_rgb: bool = False, colormap: int = cv2.COLORMAP_JET) -> np.ndarray: - """ This function overlays the cam mask on the image as an heatmap. - By default the heatmap is in BGR format. - :param img: The base image in RGB or BGR format. - :param mask: The cam mask. - :param use_rgb: Whether to use an RGB or BGR heatmap, this should be set to True if 'img' is in RGB format. - :param colormap: The OpenCV colormap to be used. - :returns: The default image with the cam overlay. - """ - heatmap = cv2.applyColorMap(np.uint8(255 * mask), colormap) - if use_rgb: - heatmap = cv2.cvtColor(heatmap, cv2.COLOR_BGR2RGB) - heatmap = np.float32(heatmap) / 255 - - normalize = lambda x: (x - np.min(x))/np.ptp(x) - - cam = 0.6 * heatmap + normalize(img) - cam = cam / np.max(cam) - return np.uint8(255 * cam) - -def _strip_image_from_grid_row(row, gap=5, bg=255): - strip = torch.full( - (row.shape[0] * (row.shape[3] + gap) - gap, - row.shape[1] * (row.shape[3] + gap) - gap, - row.shape[4]), bg, dtype=row.dtype) - for i in range(0, row.shape[0] * row.shape[1]): - strip[(i // row.shape[1]) * (row.shape[2] + gap) : ((i // row.shape[1])+1) * (row.shape[2] + gap) - gap, - (i % row.shape[1]) * (row.shape[3] + gap) : ((i % row.shape[1])+1) * (row.shape[3] + gap) - gap, - :] = row[i // row.shape[1]][i % row.shape[1]] - return PIL.Image.fromarray(strip.numpy()) - - -class GradCAMBuilder(ValidationLoopHook): - def __init__(self, image_shape: Iterable[int], target_category: int = None, num_images: int = 5): - self.image_shape = image_shape - self.target_category = target_category - self.num_images = num_images - - self.targets = torch.zeros(self.num_images) - self.activations = torch.zeros(self.num_images) - self.images = torch.zeros(torch.Size([self.num_images]) + torch.Size(self.image_shape)) - - def process(self, batch, target_batch, logits_batch, prediction_batch): - image_batch = batch["image"] - - with torch.no_grad(): - if self.target_category is not None: - local_activations = logits_batch[:, self.target_category] - else: - local_activations = torch.amax(logits_batch, dim=-1) - - # filter samples where the prediction lines up with the target - target_match = (prediction_batch == target_batch) - - # filter public dataset samples - public = torch.tensor(["verse" in id for id in 
batch["verse_id"]]).type_as(target_match) - - mask = target_match & public - - if torch.max(mask) == False: - # no samples match criteria in this batch, skip - return - - # identify better activations and replace them accordingly - local_top_idx = torch.argsort(local_activations, descending=True) - # filter samples - local_top_idx = local_top_idx[mask[local_top_idx]] - current_idx = 0 - - while current_idx < self.num_images and local_activations[local_top_idx[current_idx]] > torch.min(self.activations): - # next item in local batch matches criteria and has a higher activation than one in the global batch, replace it - idx_to_replace = torch.argsort(self.activations)[0] - - self.activations[idx_to_replace] = local_activations[local_top_idx[current_idx]] - self.images[idx_to_replace] = image_batch[local_top_idx[current_idx]] - self.targets[idx_to_replace] = target_batch[local_top_idx[current_idx]] - - current_idx += 1 - - def trigger(self, module): - model = module.backbone - module.eval() - - # determine the Grad-CAM target module/layer - grad_cam_target = _get_grad_cam_target(model) - - cam = GradCAM(model, [grad_cam_target], use_cuda=torch.cuda.is_available()) - - # determine final order such that the highest activations are placed on top - sorted_idx = torch.argsort(self.activations, descending=True) - - self.activations = self.activations[sorted_idx] - self.images = self.images[sorted_idx] - self.targets = self.targets[sorted_idx] - - # if a polyaxon experiment crashes here, remove the GradCAMBuilder instance from the - # model.validation_hooks list - grad_cams = cam(input_tensor=self.images, target_category=self.target_category) - - module.train() - - if len(self.images.shape) == 5: - # 3D, visualize slices - ld_res = grad_cams.shape[-1] - img_res = self.images.shape[-1] - img_slices = torch.linspace(int(img_res/ld_res/2), img_res-int(img_res/ld_res/2), ld_res, dtype=torch.long) - - # Show all images slices in a larger combined image - grad_cams_image = _strip_image_from_grid_row( - torch.stack([ - torch.stack([ - torch.tensor( - _show_cam_on_image((self.images[i, 0, ..., img_slices[s]]).unsqueeze(-1).repeat(1, 1, 3).numpy(), grad_cams[i, ..., s], use_rgb=True) - ) - for s in range(grad_cams.shape[-1])]) - for i in range(self.num_images if self.num_images < grad_cams.shape[0] else grad_cams.shape[0])]) - ) - - elif len(self.images.shape) == 4: - # 2D - grad_cams_image = _strip_image_from_grid_row( - torch.stack([ - torch.stack([ - torch.tensor( - _show_cam_on_image((self.images[i, 0, ...]).unsqueeze(-1).repeat(1, 1, 3).numpy(), grad_cams[i, ...], use_rgb=True) - ) - ]) - for i in range(self.num_images if self.num_images < grad_cams.shape[0] else grad_cams.shape[0])]) - ) - - else: - raise RuntimeError("Attempting to build Grad-CAMs for data that is neither 2D nor 3D") - - module.logger.experiment.log({ - "val/grad_cam": wandb.Image(grad_cams_image) - }) - - def reset(self): - self.targets = torch.zeros(self.num_images) - self.activations = torch.zeros(self.num_images) - self.images = torch.zeros(torch.Size([self.num_images]) + torch.Size(self.image_shape)) \ No newline at end of file diff --git a/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/test_wavs/wenetspeech/README.md b/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/test_wavs/wenetspeech/README.md deleted file mode 100644 index 6a2aac877892426c5fa3c90a1dfc4cac93fa2ed8..0000000000000000000000000000000000000000 --- 
a/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/test_wavs/wenetspeech/README.md +++ /dev/null @@ -1,2 +0,0 @@ -Files are downloaded from -https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2/tree/main/test_wavs diff --git a/spaces/phenomenon1981/DreamlikeArt-Diffusion-1.0/app.py b/spaces/phenomenon1981/DreamlikeArt-Diffusion-1.0/app.py deleted file mode 100644 index aef272dd9b2b4de0f8de0324edb32113d90b5e76..0000000000000000000000000000000000000000 --- a/spaces/phenomenon1981/DreamlikeArt-Diffusion-1.0/app.py +++ /dev/null @@ -1,154 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path -import random -import string -import time -from queue import Queue -from threading import Thread -import emoji - -text_gen=gr.Interface.load("spaces/phenomenon1981/MagicPrompt-Stable-Diffusion") -def get_prompts(prompt_text): - if prompt_text: - return text_gen("dreamlikeart, " + prompt_text) - else: - return text_gen("") -proc1=gr.Interface.load("models/dreamlike-art/dreamlike-diffusion-1.0") - -def restart_script_periodically(): - while True: - random_time = random.randint(540, 600) - time.sleep(random_time) - os.execl(sys.executable, sys.executable, *sys.argv) - - -restart_thread = Thread(target=restart_script_periodically, daemon=True) -restart_thread.start() - - -queue = Queue() -queue_threshold = 100 - -def add_random_noise(prompt, noise_level=0.00): - if noise_level == 0: - noise_level = 0.00 - percentage_noise = noise_level * 5 - num_noise_chars = int(len(prompt) * (percentage_noise/100)) - noise_indices = random.sample(range(len(prompt)), num_noise_chars) - prompt_list = list(prompt) - noise_chars = list(string.ascii_letters + string.punctuation + ' ' + string.digits) - noise_chars.extend(['😍', '💩', '😂', '🤔', '😊', '🤗', '😭', '🙄', '😷', '🤯', '🤫', '🥴', '😴', '🤩', '🥳', '😔', '😩', '🤪', '😇', '🤢', '😈', '👹', '👻', '🤖', '👽', '💀', '🎃', '🎅', '🎄', '🎁', '🎂', '🎉', '🎈', '🎊', '🎮', '❤️', '💔', '💕', '💖', '💗', '🐶', '🐱', '🐭', '🐹', '🦊', '🐻', '🐨', '🐯', '🦁', '🐘', '🔥', '🌧️', '🌞', '🌈', '💥', '🌴', '🌊', '🌺', '🌻', '🌸', '🎨', '🌅', '🌌', '☁️', '⛈️', '❄️', '☀️', '🌤️', '⛅️', '🌥️', '🌦️', '🌧️', '🌩️', '🌨️', '🌫️', '☔️', '🌬️', '💨', '🌪️', '🌈']) - for index in noise_indices: - prompt_list[index] = random.choice(noise_chars) - return "".join(prompt_list) - - - -def send_it1(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - while queue.qsize() >= queue_threshold: - time.sleep(2) - queue.put(prompt_with_noise) - output1 = proc1(prompt_with_noise) - return output1 - -def send_it2(inputs, noise_level, proc1=proc1): - prompt_with_noise = add_random_noise(inputs, noise_level) - while queue.qsize() >= queue_threshold: - time.sleep(2) - queue.put(prompt_with_noise) - output2 = proc1(prompt_with_noise) - return output2 - -#def send_it3(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #while queue.qsize() >= queue_threshold: - #time.sleep(2) - #queue.put(prompt_with_noise) - #output3 = proc1(prompt_with_noise) - #return output3 - -#def send_it4(inputs, noise_level, proc1=proc1): - #prompt_with_noise = add_random_noise(inputs, noise_level) - #while queue.qsize() >= queue_threshold: - #time.sleep(2) - #queue.put(prompt_with_noise) - #output4 = proc1(prompt_with_noise) - #return output4 - - -with gr.Blocks(css='style.css') as demo: - gr.HTML( - """ -
    - """ - ) - with gr.Column(elem_id="col-container"): - with gr.Row(variant="compact"): - input_text = gr.Textbox( - label="Short Prompt", - show_label=False, - max_lines=2, - placeholder="Enter a basic idea and click 'Magic Prompt'. Got no ideas? No problem, Simply just hit the magic button!", - ).style( - container=False, - ) - see_prompts = gr.Button("✨ Magic Prompt ✨").style(full_width=False) - - - with gr.Row(variant="compact"): - prompt = gr.Textbox( - label="Enter your prompt", - show_label=False, - max_lines=2, - placeholder="Full Prompt", - ).style( - container=False, - ) - run = gr.Button("Generate Images").style(full_width=False) - - with gr.Row(): - with gr.Row(): - noise_level = gr.Slider(minimum=0.0, maximum=3, step=0.1, label="Noise Level") - with gr.Row(): - with gr.Row(): - output1=gr.Image(label="Dreamlike Diffusion 1.0",show_label=False) - output2=gr.Image(label="Dreamlike Diffusion 1.0",show_label=False) - - - see_prompts.click(get_prompts, inputs=[input_text], outputs=[prompt], queue=False) - run.click(send_it1, inputs=[prompt, noise_level], outputs=[output1]) - run.click(send_it2, inputs=[prompt, noise_level], outputs=[output2]) - - - with gr.Row(): - gr.HTML( - """ - -
    -

    Unleash your creative side and generate mesmerizing images with just a few clicks! Enter a spark of inspiration in the "Basic Idea" text box and click the "Magic Prompt" button to elevate it to a polished masterpiece. Make any final tweaks in the "Full Prompt" box and hit the "Generate Images" button to watch your vision come to life. Experiment with the "Noise Level" for a diverse range of outputs, from similar to wildly unique. Let the fun begin! -

    -
    - """ -) - - demo.launch(enable_queue=True, inline=True) - block.queue(concurrency_count=100) \ No newline at end of file diff --git a/spaces/pierreguillou/question-answering-portuguese-t5-base/README.md b/spaces/pierreguillou/question-answering-portuguese-t5-base/README.md deleted file mode 100644 index 4a24c78a4e047e6dbfdd04a03649133c9e18ca0d..0000000000000000000000000000000000000000 --- a/spaces/pierreguillou/question-answering-portuguese-t5-base/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Question Answering Portuguese T5 Base -emoji: ⚡ -colorFrom: blue -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/priyankasharma5882/Breed_Classification/README.md b/spaces/priyankasharma5882/Breed_Classification/README.md deleted file mode 100644 index adf77ca7e4bb1de652cd1315a2bfe08deee7b5c8..0000000000000000000000000000000000000000 --- a/spaces/priyankasharma5882/Breed_Classification/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Breed_Classification -emoji: 😻 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/layout.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/layout.py deleted file mode 100644 index 6b85cd503387291f326e937b36a5739b1de23ef1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/layout.py +++ /dev/null @@ -1,530 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools import ttLib -from fontTools.ttLib.tables.DefaultTable import DefaultTable -from fontTools.ttLib.tables import otTables -from fontTools.merge.base import add_method, mergeObjects -from fontTools.merge.util import * -import logging - - -log = logging.getLogger("fontTools.merge") - - -def mergeLookupLists(lst): - # TODO Do smarter merge. 
- return sumLists(lst) - - -def mergeFeatures(lst): - assert lst - self = otTables.Feature() - self.FeatureParams = None - self.LookupListIndex = mergeLookupLists( - [l.LookupListIndex for l in lst if l.LookupListIndex] - ) - self.LookupCount = len(self.LookupListIndex) - return self - - -def mergeFeatureLists(lst): - d = {} - for l in lst: - for f in l: - tag = f.FeatureTag - if tag not in d: - d[tag] = [] - d[tag].append(f.Feature) - ret = [] - for tag in sorted(d.keys()): - rec = otTables.FeatureRecord() - rec.FeatureTag = tag - rec.Feature = mergeFeatures(d[tag]) - ret.append(rec) - return ret - - -def mergeLangSyses(lst): - assert lst - - # TODO Support merging ReqFeatureIndex - assert all(l.ReqFeatureIndex == 0xFFFF for l in lst) - - self = otTables.LangSys() - self.LookupOrder = None - self.ReqFeatureIndex = 0xFFFF - self.FeatureIndex = mergeFeatureLists( - [l.FeatureIndex for l in lst if l.FeatureIndex] - ) - self.FeatureCount = len(self.FeatureIndex) - return self - - -def mergeScripts(lst): - assert lst - - if len(lst) == 1: - return lst[0] - langSyses = {} - for sr in lst: - for lsr in sr.LangSysRecord: - if lsr.LangSysTag not in langSyses: - langSyses[lsr.LangSysTag] = [] - langSyses[lsr.LangSysTag].append(lsr.LangSys) - lsrecords = [] - for tag, langSys_list in sorted(langSyses.items()): - lsr = otTables.LangSysRecord() - lsr.LangSys = mergeLangSyses(langSys_list) - lsr.LangSysTag = tag - lsrecords.append(lsr) - - self = otTables.Script() - self.LangSysRecord = lsrecords - self.LangSysCount = len(lsrecords) - dfltLangSyses = [s.DefaultLangSys for s in lst if s.DefaultLangSys] - if dfltLangSyses: - self.DefaultLangSys = mergeLangSyses(dfltLangSyses) - else: - self.DefaultLangSys = None - return self - - -def mergeScriptRecords(lst): - d = {} - for l in lst: - for s in l: - tag = s.ScriptTag - if tag not in d: - d[tag] = [] - d[tag].append(s.Script) - ret = [] - for tag in sorted(d.keys()): - rec = otTables.ScriptRecord() - rec.ScriptTag = tag - rec.Script = mergeScripts(d[tag]) - ret.append(rec) - return ret - - -otTables.ScriptList.mergeMap = { - "ScriptCount": lambda lst: None, # TODO - "ScriptRecord": mergeScriptRecords, -} -otTables.BaseScriptList.mergeMap = { - "BaseScriptCount": lambda lst: None, # TODO - # TODO: Merge duplicate entries - "BaseScriptRecord": lambda lst: sorted( - sumLists(lst), key=lambda s: s.BaseScriptTag - ), -} - -otTables.FeatureList.mergeMap = { - "FeatureCount": sum, - "FeatureRecord": lambda lst: sorted(sumLists(lst), key=lambda s: s.FeatureTag), -} - -otTables.LookupList.mergeMap = { - "LookupCount": sum, - "Lookup": sumLists, -} - -otTables.Coverage.mergeMap = { - "Format": min, - "glyphs": sumLists, -} - -otTables.ClassDef.mergeMap = { - "Format": min, - "classDefs": sumDicts, -} - -otTables.LigCaretList.mergeMap = { - "Coverage": mergeObjects, - "LigGlyphCount": sum, - "LigGlyph": sumLists, -} - -otTables.AttachList.mergeMap = { - "Coverage": mergeObjects, - "GlyphCount": sum, - "AttachPoint": sumLists, -} - -# XXX Renumber MarkFilterSets of lookups -otTables.MarkGlyphSetsDef.mergeMap = { - "MarkSetTableFormat": equal, - "MarkSetCount": sum, - "Coverage": sumLists, -} - -otTables.Axis.mergeMap = { - "*": mergeObjects, -} - -# XXX Fix BASE table merging -otTables.BaseTagList.mergeMap = { - "BaseTagCount": sum, - "BaselineTag": sumLists, -} - -otTables.GDEF.mergeMap = ( - otTables.GSUB.mergeMap -) = ( - otTables.GPOS.mergeMap -) = otTables.BASE.mergeMap = otTables.JSTF.mergeMap = otTables.MATH.mergeMap = { - "*": mergeObjects, - "Version": max, 
-} - -ttLib.getTableClass("GDEF").mergeMap = ttLib.getTableClass( - "GSUB" -).mergeMap = ttLib.getTableClass("GPOS").mergeMap = ttLib.getTableClass( - "BASE" -).mergeMap = ttLib.getTableClass( - "JSTF" -).mergeMap = ttLib.getTableClass( - "MATH" -).mergeMap = { - "tableTag": onlyExisting(equal), # XXX clean me up - "table": mergeObjects, -} - - -@add_method(ttLib.getTableClass("GSUB")) -def merge(self, m, tables): - assert len(tables) == len(m.duplicateGlyphsPerFont) - for i, (table, dups) in enumerate(zip(tables, m.duplicateGlyphsPerFont)): - if not dups: - continue - if table is None or table is NotImplemented: - log.warning( - "Have non-identical duplicates to resolve for '%s' but no GSUB. Are duplicates intended?: %s", - m.fonts[i]._merger__name, - dups, - ) - continue - - synthFeature = None - synthLookup = None - for script in table.table.ScriptList.ScriptRecord: - if script.ScriptTag == "DFLT": - continue # XXX - for langsys in [script.Script.DefaultLangSys] + [ - l.LangSys for l in script.Script.LangSysRecord - ]: - if langsys is None: - continue # XXX Create! - feature = [v for v in langsys.FeatureIndex if v.FeatureTag == "locl"] - assert len(feature) <= 1 - if feature: - feature = feature[0] - else: - if not synthFeature: - synthFeature = otTables.FeatureRecord() - synthFeature.FeatureTag = "locl" - f = synthFeature.Feature = otTables.Feature() - f.FeatureParams = None - f.LookupCount = 0 - f.LookupListIndex = [] - table.table.FeatureList.FeatureRecord.append(synthFeature) - table.table.FeatureList.FeatureCount += 1 - feature = synthFeature - langsys.FeatureIndex.append(feature) - langsys.FeatureIndex.sort(key=lambda v: v.FeatureTag) - - if not synthLookup: - subtable = otTables.SingleSubst() - subtable.mapping = dups - synthLookup = otTables.Lookup() - synthLookup.LookupFlag = 0 - synthLookup.LookupType = 1 - synthLookup.SubTableCount = 1 - synthLookup.SubTable = [subtable] - if table.table.LookupList is None: - # mtiLib uses None as default value for LookupList, - # while feaLib points to an empty array with count 0 - # TODO: make them do the same - table.table.LookupList = otTables.LookupList() - table.table.LookupList.Lookup = [] - table.table.LookupList.LookupCount = 0 - table.table.LookupList.Lookup.append(synthLookup) - table.table.LookupList.LookupCount += 1 - - if feature.Feature.LookupListIndex[:1] != [synthLookup]: - feature.Feature.LookupListIndex[:0] = [synthLookup] - feature.Feature.LookupCount += 1 - - DefaultTable.merge(self, m, tables) - return self - - -@add_method( - otTables.SingleSubst, - otTables.MultipleSubst, - otTables.AlternateSubst, - otTables.LigatureSubst, - otTables.ReverseChainSingleSubst, - otTables.SinglePos, - otTables.PairPos, - otTables.CursivePos, - otTables.MarkBasePos, - otTables.MarkLigPos, - otTables.MarkMarkPos, -) -def mapLookups(self, lookupMap): - pass - - -# Copied and trimmed down from subset.py -@add_method( - otTables.ContextSubst, - otTables.ChainContextSubst, - otTables.ContextPos, - otTables.ChainContextPos, -) -def __merge_classify_context(self): - class ContextHelper(object): - def __init__(self, klass, Format): - if klass.__name__.endswith("Subst"): - Typ = "Sub" - Type = "Subst" - else: - Typ = "Pos" - Type = "Pos" - if klass.__name__.startswith("Chain"): - Chain = "Chain" - else: - Chain = "" - ChainTyp = Chain + Typ - - self.Typ = Typ - self.Type = Type - self.Chain = Chain - self.ChainTyp = ChainTyp - - self.LookupRecord = Type + "LookupRecord" - - if Format == 1: - self.Rule = ChainTyp + "Rule" - self.RuleSet = ChainTyp 
+ "RuleSet" - elif Format == 2: - self.Rule = ChainTyp + "ClassRule" - self.RuleSet = ChainTyp + "ClassSet" - - if self.Format not in [1, 2, 3]: - return None # Don't shoot the messenger; let it go - if not hasattr(self.__class__, "_merge__ContextHelpers"): - self.__class__._merge__ContextHelpers = {} - if self.Format not in self.__class__._merge__ContextHelpers: - helper = ContextHelper(self.__class__, self.Format) - self.__class__._merge__ContextHelpers[self.Format] = helper - return self.__class__._merge__ContextHelpers[self.Format] - - -@add_method( - otTables.ContextSubst, - otTables.ChainContextSubst, - otTables.ContextPos, - otTables.ChainContextPos, -) -def mapLookups(self, lookupMap): - c = self.__merge_classify_context() - - if self.Format in [1, 2]: - for rs in getattr(self, c.RuleSet): - if not rs: - continue - for r in getattr(rs, c.Rule): - if not r: - continue - for ll in getattr(r, c.LookupRecord): - if not ll: - continue - ll.LookupListIndex = lookupMap[ll.LookupListIndex] - elif self.Format == 3: - for ll in getattr(self, c.LookupRecord): - if not ll: - continue - ll.LookupListIndex = lookupMap[ll.LookupListIndex] - else: - assert 0, "unknown format: %s" % self.Format - - -@add_method(otTables.ExtensionSubst, otTables.ExtensionPos) -def mapLookups(self, lookupMap): - if self.Format == 1: - self.ExtSubTable.mapLookups(lookupMap) - else: - assert 0, "unknown format: %s" % self.Format - - -@add_method(otTables.Lookup) -def mapLookups(self, lookupMap): - for st in self.SubTable: - if not st: - continue - st.mapLookups(lookupMap) - - -@add_method(otTables.LookupList) -def mapLookups(self, lookupMap): - for l in self.Lookup: - if not l: - continue - l.mapLookups(lookupMap) - - -@add_method(otTables.Lookup) -def mapMarkFilteringSets(self, markFilteringSetMap): - if self.LookupFlag & 0x0010: - self.MarkFilteringSet = markFilteringSetMap[self.MarkFilteringSet] - - -@add_method(otTables.LookupList) -def mapMarkFilteringSets(self, markFilteringSetMap): - for l in self.Lookup: - if not l: - continue - l.mapMarkFilteringSets(markFilteringSetMap) - - -@add_method(otTables.Feature) -def mapLookups(self, lookupMap): - self.LookupListIndex = [lookupMap[i] for i in self.LookupListIndex] - - -@add_method(otTables.FeatureList) -def mapLookups(self, lookupMap): - for f in self.FeatureRecord: - if not f or not f.Feature: - continue - f.Feature.mapLookups(lookupMap) - - -@add_method(otTables.DefaultLangSys, otTables.LangSys) -def mapFeatures(self, featureMap): - self.FeatureIndex = [featureMap[i] for i in self.FeatureIndex] - if self.ReqFeatureIndex != 65535: - self.ReqFeatureIndex = featureMap[self.ReqFeatureIndex] - - -@add_method(otTables.Script) -def mapFeatures(self, featureMap): - if self.DefaultLangSys: - self.DefaultLangSys.mapFeatures(featureMap) - for l in self.LangSysRecord: - if not l or not l.LangSys: - continue - l.LangSys.mapFeatures(featureMap) - - -@add_method(otTables.ScriptList) -def mapFeatures(self, featureMap): - for s in self.ScriptRecord: - if not s or not s.Script: - continue - s.Script.mapFeatures(featureMap) - - -def layoutPreMerge(font): - # Map indices to references - - GDEF = font.get("GDEF") - GSUB = font.get("GSUB") - GPOS = font.get("GPOS") - - for t in [GSUB, GPOS]: - if not t: - continue - - if t.table.LookupList: - lookupMap = {i: v for i, v in enumerate(t.table.LookupList.Lookup)} - t.table.LookupList.mapLookups(lookupMap) - t.table.FeatureList.mapLookups(lookupMap) - - if ( - GDEF - and GDEF.table.Version >= 0x00010002 - and GDEF.table.MarkGlyphSetsDef - ): 
-
-
-def layoutPreMerge(font):
-    # Map indices to references
-
-    GDEF = font.get("GDEF")
-    GSUB = font.get("GSUB")
-    GPOS = font.get("GPOS")
-
-    for t in [GSUB, GPOS]:
-        if not t:
-            continue
-
-        if t.table.LookupList:
-            lookupMap = {i: v for i, v in enumerate(t.table.LookupList.Lookup)}
-            t.table.LookupList.mapLookups(lookupMap)
-            t.table.FeatureList.mapLookups(lookupMap)
-
-            if (
-                GDEF
-                and GDEF.table.Version >= 0x00010002
-                and GDEF.table.MarkGlyphSetsDef
-            ):
-                markFilteringSetMap = {
-                    i: v for i, v in enumerate(GDEF.table.MarkGlyphSetsDef.Coverage)
-                }
-                t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap)
-
-        if t.table.FeatureList and t.table.ScriptList:
-            featureMap = {i: v for i, v in enumerate(t.table.FeatureList.FeatureRecord)}
-            t.table.ScriptList.mapFeatures(featureMap)
-
-    # TODO FeatureParams nameIDs
-
-
-def layoutPostMerge(font):
-    # Map references back to indices
-
-    GDEF = font.get("GDEF")
-    GSUB = font.get("GSUB")
-    GPOS = font.get("GPOS")
-
-    for t in [GSUB, GPOS]:
-        if not t:
-            continue
-
-        if t.table.FeatureList and t.table.ScriptList:
-            # Collect unregistered (new) features.
-            featureMap = GregariousIdentityDict(t.table.FeatureList.FeatureRecord)
-            t.table.ScriptList.mapFeatures(featureMap)
-
-            # Record used features.
-            featureMap = AttendanceRecordingIdentityDict(
-                t.table.FeatureList.FeatureRecord
-            )
-            t.table.ScriptList.mapFeatures(featureMap)
-            usedIndices = featureMap.s
-
-            # Remove unused features
-            t.table.FeatureList.FeatureRecord = [
-                f
-                for i, f in enumerate(t.table.FeatureList.FeatureRecord)
-                if i in usedIndices
-            ]
-
-            # Map back to indices.
-            featureMap = NonhashableDict(t.table.FeatureList.FeatureRecord)
-            t.table.ScriptList.mapFeatures(featureMap)
-
-            t.table.FeatureList.FeatureCount = len(t.table.FeatureList.FeatureRecord)
-
-        if t.table.LookupList:
-            # Collect unregistered (new) lookups.
-            lookupMap = GregariousIdentityDict(t.table.LookupList.Lookup)
-            t.table.FeatureList.mapLookups(lookupMap)
-            t.table.LookupList.mapLookups(lookupMap)
-
-            # Record used lookups.
-            lookupMap = AttendanceRecordingIdentityDict(t.table.LookupList.Lookup)
-            t.table.FeatureList.mapLookups(lookupMap)
-            t.table.LookupList.mapLookups(lookupMap)
-            usedIndices = lookupMap.s
-
-            # Remove unused lookups
-            t.table.LookupList.Lookup = [
-                l for i, l in enumerate(t.table.LookupList.Lookup) if i in usedIndices
-            ]
-
-            # Map back to indices.
-            lookupMap = NonhashableDict(t.table.LookupList.Lookup)
-            t.table.FeatureList.mapLookups(lookupMap)
-            t.table.LookupList.mapLookups(lookupMap)
-
-            t.table.LookupList.LookupCount = len(t.table.LookupList.Lookup)
-
-            if GDEF and GDEF.table.Version >= 0x00010002:
-                markFilteringSetMap = NonhashableDict(
-                    GDEF.table.MarkGlyphSetsDef.Coverage
-                )
-                t.table.LookupList.mapMarkFilteringSets(markFilteringSetMap)
-
-    # TODO FeatureParams nameIDs
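`layoutPostMerge` leans on three dict-like helpers whose definitions are not part of this hunk (they live elsewhere in fontTools' merge utilities). A hedged sketch of stand-ins that mimic only the behavior relied on above, keyed by `id()` because OpenType table objects are not hashable:

```python
# Hedged stand-ins for the helpers used in layoutPostMerge; these mimic
# only the behavior the code above depends on.
class GregariousIdentityDict:
    """Identity map that welcomes unseen objects by appending them to the list."""
    def __init__(self, lst):
        self.lst, self.ids = lst, {id(v) for v in lst}
    def __getitem__(self, v):
        if id(v) not in self.ids:
            self.ids.add(id(v))
            self.lst.append(v)
        return v

class AttendanceRecordingIdentityDict:
    """Identity map that records the indices of the objects actually used."""
    def __init__(self, lst):
        self.d = {id(v): i for i, v in enumerate(lst)}
        self.s = set()
    def __getitem__(self, v):
        self.s.add(self.d[id(v)])
        return v

class NonhashableDict:
    """Maps objects back to their list indices by identity."""
    def __init__(self, lst):
        self.d = {id(v): i for i, v in enumerate(lst)}
    def __getitem__(self, v):
        return self.d[id(v)]

lookups = [object(), object(), object()]
attendance = AttendanceRecordingIdentityDict(lookups)
attendance[lookups[2]]
print(attendance.s)  # {2}: only the third lookup was referenced
```

Together the three passes implement "register new objects, record which ones are reachable, drop the rest, then renumber", which is why the collect/record/remove/map-back sequence appears twice above, once for features and once for lookups.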
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/optimize/__main__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/optimize/__main__.py
deleted file mode 100644
index b0ae9081ca8dac338bcf085c71adad87805e3bad..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/otlLib/optimize/__main__.py
+++ /dev/null
@@ -1,6 +0,0 @@
-import sys
-from fontTools.otlLib.optimize import main
-
-
-if __name__ == "__main__":
-    sys.exit(main())
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/BitmapGlyphMetrics.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/BitmapGlyphMetrics.py
deleted file mode 100644
index 10b4f828213b8320d54eefed3d5e66f2ba532101..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/tables/BitmapGlyphMetrics.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# Since bitmap glyph metrics are shared between EBLC and EBDT
-# this class gets its own python file.
-from fontTools.misc import sstruct
-from fontTools.misc.textTools import safeEval
-import logging
-
-
-log = logging.getLogger(__name__)
-
-bigGlyphMetricsFormat = """
-    > # big endian
-    height:       B
-    width:        B
-    horiBearingX: b
-    horiBearingY: b
-    horiAdvance:  B
-    vertBearingX: b
-    vertBearingY: b
-    vertAdvance:  B
-"""
-
-smallGlyphMetricsFormat = """
-    > # big endian
-    height:   B
-    width:    B
-    BearingX: b
-    BearingY: b
-    Advance:  B
-"""
-
-
-class BitmapGlyphMetrics(object):
-    def toXML(self, writer, ttFont):
-        writer.begintag(self.__class__.__name__)
-        writer.newline()
-        for metricName in sstruct.getformat(self.__class__.binaryFormat)[1]:
-            writer.simpletag(metricName, value=getattr(self, metricName))
-            writer.newline()
-        writer.endtag(self.__class__.__name__)
-        writer.newline()
-
-    def fromXML(self, name, attrs, content, ttFont):
-        metricNames = set(sstruct.getformat(self.__class__.binaryFormat)[1])
-        for element in content:
-            if not isinstance(element, tuple):
-                continue
-            name, attrs, content = element
-            # Make sure this is a metric that is needed by GlyphMetrics.
-            if name in metricNames:
-                vars(self)[name] = safeEval(attrs["value"])
-            else:
-                log.warning(
-                    "unknown name '%s' being ignored in %s.",
-                    name,
-                    self.__class__.__name__,
-                )
-
-
-class BigGlyphMetrics(BitmapGlyphMetrics):
-    binaryFormat = bigGlyphMetricsFormat
-
-
-class SmallGlyphMetrics(BitmapGlyphMetrics):
-    binaryFormat = smallGlyphMetricsFormat
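These `sstruct` format strings pair each field name with a struct code (`B` for an unsigned byte, `b` for a signed byte), so a big-metrics record is eight bytes. A hedged sketch of how such a format packs and unpacks (the metric values are invented, and `bigGlyphMetricsFormat` is assumed to be in scope as defined above):

```python
from fontTools.misc import sstruct

# Hypothetical metric values for one bitmap glyph.
metrics = {
    "height": 8,
    "width": 6,
    "horiBearingX": 1,
    "horiBearingY": 7,
    "horiAdvance": 7,
    "vertBearingX": -3,
    "vertBearingY": 0,
    "vertAdvance": 9,
}

data = sstruct.pack(bigGlyphMetricsFormat, metrics)  # big-endian bytes
assert len(data) == sstruct.calcsize(bigGlyphMetricsFormat) == 8
assert sstruct.unpack(bigGlyphMetricsFormat, data) == metrics
```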
diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/svelte-native/proxy-adapter-native.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/svelte-native/proxy-adapter-native.js
deleted file mode 100644
index f927072ad5ca3d7e71696bc4152ee1d31c3a7342..0000000000000000000000000000000000000000
--- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/files/runtime/svelte-native/proxy-adapter-native.js
+++ /dev/null
@@ -1,341 +0,0 @@
-/* global document */
-
-import { adapter as ProxyAdapterDom } from '../proxy-adapter-dom'
-
-import { patchShowModal, getModalData } from './patch-page-show-modal'
-
-patchShowModal()
-
-// Svelte Native support
-// =====================
-//
-// Rerendering a Svelte Native page proves challenging...
-//
-// In NativeScript, pages are the top level component. They are normally
-// introduced into NativeScript's runtime by its `navigate` function. This
-// is how Svelte Native handles it: it renders the Page component to a
-// dummy fragment, and "navigates" to the page element thus created.
-//
-// As long as modifications only impact child components of the page, then
-// we can keep the existing page and replace its content for HMR.
-//
-// However, if the page component itself is modified (including its system
-// title bar), things get hairy...
-//
-// Apparently, the sole way of introducing a new page in a NS application is
-// to navigate to it (no way to just replace it in its parent "element", for
-// example). This is how it is done in NS's own "core" HMR.
-//
-// NOTE The last paragraph has not really been confirmed with NS6.
-//
-// Unfortunately the API they're using to do that is not public... Its various
-// parts remain exposed though (but documented as private), so this exploratory
-// work now relies on it. It might be fragile...
-//
-// The problem is that there is no public API that can navigate to a page and
-// replace (like location.replace) the current history entry. Actually there
-// is an active issue at NS asking for that. Incidentally, members of
-// NativeScript-Vue have commented on the issue to weigh in for it -- they
-// probably face some similar challenge.
-//
-// https://github.com/NativeScript/NativeScript/issues/6283
-
-const getNavTransition = ({ transition }) => {
-  if (typeof transition === 'string') {
-    transition = { name: transition }
-  }
-  return transition ? { animated: true, transition } : { animated: false }
-}
-
-// copied from TNS FrameBase.replacePage
-//
-// it is not public but there is a comment in there indicating it is for
-// HMR (probably their own core HMR though)
-//
-// NOTE this "worked" in TNS 5, but not anymore in TNS 6: updated version below
-//
-// eslint-disable-next-line no-unused-vars
-const replacePage_tns5 = (frame, newPageElement, hotOptions) => {
-  const currentBackstackEntry = frame._currentEntry
-  frame.navigationType = 2
-  frame.performNavigation({
-    isBackNavigation: false,
-    entry: {
-      resolvedPage: newPageElement.nativeView,
-      //
-      // entry: currentBackstackEntry.entry,
-      entry: Object.assign(
-        currentBackstackEntry.entry,
-        getNavTransition(hotOptions)
-      ),
-      navDepth: currentBackstackEntry.navDepth,
-      fragmentTag: currentBackstackEntry.fragmentTag,
-      frameId: currentBackstackEntry.frameId,
-    },
-  })
-}
-
-// Updated for TNS v6
-//
-// https://github.com/NativeScript/NativeScript/blob/6.1.1/tns-core-modules/ui/frame/frame-common.ts#L656
-const replacePage = (frame, newPageElement) => {
-  const currentBackstackEntry = frame._currentEntry
-  const newPage = newPageElement.nativeView
-  const newBackstackEntry = {
-    entry: currentBackstackEntry.entry,
-    resolvedPage: newPage,
-    navDepth: currentBackstackEntry.navDepth,
-    fragmentTag: currentBackstackEntry.fragmentTag,
-    frameId: currentBackstackEntry.frameId,
-  }
-  const navigationContext = {
-    entry: newBackstackEntry,
-    isBackNavigation: false,
-    navigationType: 2 /* NavigationType replace */,
-  }
-  frame._navigationQueue.push(navigationContext)
-  frame._processNextNavigationEntry()
-}
-
-export const adapter = class ProxyAdapterNative extends ProxyAdapterDom {
-  constructor(instance) {
-    super(instance)
-
-    this.nativePageElement = null
-    this.originalNativeView = null
-    this.navigatedFromHandler = null
-
-    this.relayNativeNavigatedFrom = this.relayNativeNavigatedFrom.bind(this)
-  }
-
-  dispose() {
-    super.dispose()
-    this.releaseNativePageElement()
-  }
-
-  releaseNativePageElement() {
-    if (this.nativePageElement) {
-      // native cleaning will happen when navigating back from the page
-      this.nativePageElement = null
-    }
-  }
-
-  // svelte-native uses the navigatedFrom event + e.isBackNavigation to know
-  // when to $destroy the component -- but we don't want our proxy instance
-  // destroyed when we renavigate to the same page for navigation purposes!
-  interceptPageNavigation(pageElement) {
-    const originalNativeView = pageElement.nativeView
-    const { on } = originalNativeView
-    const ownOn = originalNativeView.hasOwnProperty('on')
-    // tricks svelte-native into giving us its handler
-    originalNativeView.on = function(type, handler) {
-      if (type === 'navigatedFrom') {
-        this.navigatedFromHandler = handler
-        if (ownOn) {
-          originalNativeView.on = on
-        } else {
-          delete originalNativeView.on
-        }
-      } else {
-        // some other handler wireup; we will just pass it on.
-        if (on) {
-          on(type, handler)
-        }
-      }
-    }
-  }
-
-  afterMount(target, anchor) {
-    // nativePageElement needs to be updated each time (only for page
-    // components; native components that are not pages follow the normal flow)
-    //
-    // TODO what about components that are initially a page, but then have the
-    //   tag removed while running? or the opposite?
-    //
-    // insertionPoint needs to be updated _only when the target changes_ --
-    // i.e. when the component is mounted, i.e. (in svelte3) when the component
-    // is _created_, and svelte3 doesn't allow it to move afterward -- that
-    // is, insertionPoint only needs to be created once when the component is
-    // first mounted.
-    //
-    // TODO is it really true that components' elements cannot move in the
-    //   DOM? what about keyed lists?
-    //
-    const isNativePage =
-      (target.tagName === 'fragment' || target.tagName === 'frame') &&
-      target.firstChild &&
-      target.firstChild.tagName == 'page'
-    if (isNativePage) {
-      const nativePageElement = target.firstChild
-      this.interceptPageNavigation(nativePageElement)
-      this.nativePageElement = nativePageElement
-    } else {
-      // try to protect against components changing from page to no-page
-      // or vice versa -- see DEBUG 1 above. NOT TESTED so probably not working
-      this.nativePageElement = null
-      super.afterMount(target, anchor)
-    }
-  }
-
-  rerender() {
-    const { nativePageElement } = this
-    if (nativePageElement) {
-      this.rerenderNative()
-    } else {
-      super.rerender()
-    }
-  }
-
-  rerenderNative() {
-    const { nativePageElement: oldPageElement } = this
-    const nativeView = oldPageElement.nativeView
-    const frame = nativeView.frame
-    if (frame) {
-      return this.rerenderPage(frame, nativeView)
-    }
-    const modalParent = nativeView._modalParent // FIXME private API
-    if (modalParent) {
-      return this.rerenderModal(modalParent, nativeView)
-    }
-    // wtf? hopefully a race condition with a destroyed component, so
-    // we have nothing more to do here
-    //
-    // for once, it happens when hot reloading dev deps, like this file
-    //
-  }
-
-  rerenderPage(frame, previousPageView) {
-    const isCurrentPage = frame.currentPage === previousPageView
-    if (isCurrentPage) {
-      const {
-        instance: { hotOptions },
-      } = this
-      const newPageElement = this.createPage()
-      if (!newPageElement) {
-        throw new Error('Failed to create updated page')
-      }
-      const isFirstPage = !frame.canGoBack()
-
-      if (isFirstPage) {
-        // NOTE not so sure of the below with the new NS6 method for replace
-        //
-        // The "replacePage" strategy does not work on the first page
-        // of the stack.
-        //
-        // Resulting bug:
-        // - launch
-        // - change first page => HMR
-        // - navigate to other page
-        // - back
        //   => actual: back to OS
-        //   => expected: back to page 1
-        //
-        // Fortunately, we can overwrite history in this case.
-        //
-        const nativeView = newPageElement.nativeView
-        frame.navigate(
-          Object.assign(
-            {},
-            {
-              create: () => nativeView,
-              clearHistory: true,
-            },
-            getNavTransition(hotOptions)
-          )
-        )
-      } else {
-        replacePage(frame, newPageElement, hotOptions)
-      }
-    } else {
-      const backEntry = frame.backStack.find(
-        ({ resolvedPage: page }) => page === previousPageView
-      )
-      if (!backEntry) {
-        // well... looks like we didn't make it to history after all
-        return
-      }
-      // replace existing nativeView
-      const newPageElement = this.createPage()
-      if (newPageElement) {
-        backEntry.resolvedPage = newPageElement.nativeView
-      } else {
-        throw new Error('Failed to create updated page')
-      }
-    }
-  }
-
-  // modalParent is the page on which showModal(...) was called
-  // oldPageElement is the modal content that we're actually trying to reload
-  rerenderModal(modalParent, modalView) {
-    const modalData = getModalData(modalView)
-
-    modalData.closeCallback = () => {
-      const nativePageElement = this.createPage()
-      if (!nativePageElement) {
-        throw new Error('Failed to create updated modal page')
-      }
-      const { nativeView } = nativePageElement
-      const { originalOptions } = modalData
-      // Options will get monkey patched again; the only work left for us
-      // is to try to reduce visual disturbances.
-      //
-      // FIXME Even that proves too much unfortunately... Apparently TNS
-      //   does not respect the `animated` option in this context:
-      //   https://docs.nativescript.org/api-reference/interfaces/_ui_core_view_base_.showmodaloptions#animated
-      //
-      const options = Object.assign({}, originalOptions, { animated: false })
-      modalParent.showModal(nativeView, options)
-    }
-
-    modalView.closeModal()
-  }
-
-  createPage() {
-    const {
-      instance: { refreshComponent },
-    } = this
-    const { nativePageElement, relayNativeNavigatedFrom } = this
-    const oldNativeView = nativePageElement.nativeView
-    // rerender
-    const target = document.createElement('fragment')
-    // not using conservative for now, since there's nothing in place here to
-    // leverage it (yet?) -- and it might be easier to miss breakages in native
-    // only code paths
-    refreshComponent(target, null)
-    // this.nativePageElement is updated in afterMount, triggered by proxy / hooks
-    const newPageElement = this.nativePageElement
-    // update event proxy
-    oldNativeView.off('navigatedFrom', relayNativeNavigatedFrom)
-    nativePageElement.nativeView.on('navigatedFrom', relayNativeNavigatedFrom)
-    return newPageElement
-  }
-
-  relayNativeNavigatedFrom({ isBackNavigation }) {
-    const { originalNativeView, navigatedFromHandler } = this
-    if (!isBackNavigation) {
-      return
-    }
-    if (originalNativeView) {
-      const { off } = originalNativeView
-      const ownOff = originalNativeView.hasOwnProperty('off')
-      originalNativeView.off = function() {
-        this.navigatedFromHandler = null
-        if (ownOff) {
-          originalNativeView.off = off
-        } else {
-          delete originalNativeView.off
-        }
-      }
-    }
-    if (navigatedFromHandler) {
-      return navigatedFromHandler.apply(this, arguments)
-    }
-  }
-
-  renderError(err /* , target, anchor */) {
-    // TODO fallback on TNS error handler for now...
at least our error - // is more informative - throw err - } -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/browser.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/browser.js deleted file mode 100644 index c8069ff7bad843d64f410c7a12b23396052ef0db..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/node/dev/node_modules/esbuild-wasm/lib/browser.js +++ /dev/null @@ -1,2549 +0,0 @@ -(module=>{ -"use strict"; -var __defProp = Object.defineProperty; -var __getOwnPropDesc = Object.getOwnPropertyDescriptor; -var __getOwnPropNames = Object.getOwnPropertyNames; -var __hasOwnProp = Object.prototype.hasOwnProperty; -var __export = (target, all) => { - for (var name in all) - __defProp(target, name, { get: all[name], enumerable: true }); -}; -var __copyProps = (to, from, except, desc) => { - if (from && typeof from === "object" || typeof from === "function") { - for (let key of __getOwnPropNames(from)) - if (!__hasOwnProp.call(to, key) && key !== except) - __defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable }); - } - return to; -}; -var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod); -var __async = (__this, __arguments, generator) => { - return new Promise((resolve, reject) => { - var fulfilled = (value) => { - try { - step(generator.next(value)); - } catch (e) { - reject(e); - } - }; - var rejected = (value) => { - try { - step(generator.throw(value)); - } catch (e) { - reject(e); - } - }; - var step = (x) => x.done ? resolve(x.value) : Promise.resolve(x.value).then(fulfilled, rejected); - step((generator = generator.apply(__this, __arguments)).next()); - }); -}; - -// lib/npm/browser.ts -var browser_exports = {}; -__export(browser_exports, { - analyzeMetafile: () => analyzeMetafile, - analyzeMetafileSync: () => analyzeMetafileSync, - build: () => build, - buildSync: () => buildSync, - context: () => context, - default: () => browser_default, - formatMessages: () => formatMessages, - formatMessagesSync: () => formatMessagesSync, - initialize: () => initialize, - transform: () => transform, - transformSync: () => transformSync, - version: () => version -}); -module.exports = __toCommonJS(browser_exports); - -// lib/shared/stdio_protocol.ts -function encodePacket(packet) { - let visit = (value) => { - if (value === null) { - bb.write8(0); - } else if (typeof value === "boolean") { - bb.write8(1); - bb.write8(+value); - } else if (typeof value === "number") { - bb.write8(2); - bb.write32(value | 0); - } else if (typeof value === "string") { - bb.write8(3); - bb.write(encodeUTF8(value)); - } else if (value instanceof Uint8Array) { - bb.write8(4); - bb.write(value); - } else if (value instanceof Array) { - bb.write8(5); - bb.write32(value.length); - for (let item of value) { - visit(item); - } - } else { - let keys = Object.keys(value); - bb.write8(6); - bb.write32(keys.length); - for (let key of keys) { - bb.write(encodeUTF8(key)); - visit(value[key]); - } - } - }; - let bb = new ByteBuffer(); - bb.write32(0); - bb.write32(packet.id << 1 | +!packet.isRequest); - visit(packet.value); - writeUInt32LE(bb.buf, bb.len - 4, 0); - return bb.buf.subarray(0, bb.len); -} -function decodePacket(bytes) { - let visit = () => { - switch (bb.read8()) { - case 0: - return null; - case 1: - return !!bb.read8(); - case 
2: - return bb.read32(); - case 3: - return decodeUTF8(bb.read()); - case 4: - return bb.read(); - case 5: { - let count = bb.read32(); - let value2 = []; - for (let i = 0; i < count; i++) { - value2.push(visit()); - } - return value2; - } - case 6: { - let count = bb.read32(); - let value2 = {}; - for (let i = 0; i < count; i++) { - value2[decodeUTF8(bb.read())] = visit(); - } - return value2; - } - default: - throw new Error("Invalid packet"); - } - }; - let bb = new ByteBuffer(bytes); - let id = bb.read32(); - let isRequest = (id & 1) === 0; - id >>>= 1; - let value = visit(); - if (bb.ptr !== bytes.length) { - throw new Error("Invalid packet"); - } - return { id, isRequest, value }; -} -var ByteBuffer = class { - constructor(buf = new Uint8Array(1024)) { - this.buf = buf; - this.len = 0; - this.ptr = 0; - } - _write(delta) { - if (this.len + delta > this.buf.length) { - let clone = new Uint8Array((this.len + delta) * 2); - clone.set(this.buf); - this.buf = clone; - } - this.len += delta; - return this.len - delta; - } - write8(value) { - let offset = this._write(1); - this.buf[offset] = value; - } - write32(value) { - let offset = this._write(4); - writeUInt32LE(this.buf, value, offset); - } - write(bytes) { - let offset = this._write(4 + bytes.length); - writeUInt32LE(this.buf, bytes.length, offset); - this.buf.set(bytes, offset + 4); - } - _read(delta) { - if (this.ptr + delta > this.buf.length) { - throw new Error("Invalid packet"); - } - this.ptr += delta; - return this.ptr - delta; - } - read8() { - return this.buf[this._read(1)]; - } - read32() { - return readUInt32LE(this.buf, this._read(4)); - } - read() { - let length = this.read32(); - let bytes = new Uint8Array(length); - let ptr = this._read(bytes.length); - bytes.set(this.buf.subarray(ptr, ptr + length)); - return bytes; - } -}; -var encodeUTF8; -var decodeUTF8; -var encodeInvariant; -if (typeof TextEncoder !== "undefined" && typeof TextDecoder !== "undefined") { - let encoder = new TextEncoder(); - let decoder = new TextDecoder(); - encodeUTF8 = (text) => encoder.encode(text); - decodeUTF8 = (bytes) => decoder.decode(bytes); - encodeInvariant = 'new TextEncoder().encode("")'; -} else if (typeof Buffer !== "undefined") { - encodeUTF8 = (text) => Buffer.from(text); - decodeUTF8 = (bytes) => { - let { buffer, byteOffset, byteLength } = bytes; - return Buffer.from(buffer, byteOffset, byteLength).toString(); - }; - encodeInvariant = 'Buffer.from("")'; -} else { - throw new Error("No UTF-8 codec found"); -} -if (!(encodeUTF8("") instanceof Uint8Array)) - throw new Error(`Invariant violation: "${encodeInvariant} instanceof Uint8Array" is incorrectly false - -This indicates that your JavaScript environment is broken. You cannot use -esbuild in this environment because esbuild relies on this invariant. This -is not a problem with esbuild. You need to fix your environment instead. 
-`); -function readUInt32LE(buffer, offset) { - return buffer[offset++] | buffer[offset++] << 8 | buffer[offset++] << 16 | buffer[offset++] << 24; -} -function writeUInt32LE(buffer, value, offset) { - buffer[offset++] = value; - buffer[offset++] = value >> 8; - buffer[offset++] = value >> 16; - buffer[offset++] = value >> 24; -} - -// lib/shared/common.ts -var quote = JSON.stringify; -var buildLogLevelDefault = "warning"; -var transformLogLevelDefault = "silent"; -function validateTarget(target) { - validateStringValue(target, "target"); - if (target.indexOf(",") >= 0) - throw new Error(`Invalid target: ${target}`); - return target; -} -var canBeAnything = () => null; -var mustBeBoolean = (value) => typeof value === "boolean" ? null : "a boolean"; -var mustBeString = (value) => typeof value === "string" ? null : "a string"; -var mustBeRegExp = (value) => value instanceof RegExp ? null : "a RegExp object"; -var mustBeInteger = (value) => typeof value === "number" && value === (value | 0) ? null : "an integer"; -var mustBeFunction = (value) => typeof value === "function" ? null : "a function"; -var mustBeArray = (value) => Array.isArray(value) ? null : "an array"; -var mustBeObject = (value) => typeof value === "object" && value !== null && !Array.isArray(value) ? null : "an object"; -var mustBeEntryPoints = (value) => typeof value === "object" && value !== null ? null : "an array or an object"; -var mustBeWebAssemblyModule = (value) => value instanceof WebAssembly.Module ? null : "a WebAssembly.Module"; -var mustBeObjectOrNull = (value) => typeof value === "object" && !Array.isArray(value) ? null : "an object or null"; -var mustBeStringOrBoolean = (value) => typeof value === "string" || typeof value === "boolean" ? null : "a string or a boolean"; -var mustBeStringOrObject = (value) => typeof value === "string" || typeof value === "object" && value !== null && !Array.isArray(value) ? null : "a string or an object"; -var mustBeStringOrArray = (value) => typeof value === "string" || Array.isArray(value) ? null : "a string or an array"; -var mustBeStringOrUint8Array = (value) => typeof value === "string" || value instanceof Uint8Array ? null : "a string or a Uint8Array"; -var mustBeStringOrURL = (value) => typeof value === "string" || value instanceof URL ? 
null : "a string or a URL"; -function getFlag(object, keys, key, mustBeFn) { - let value = object[key]; - keys[key + ""] = true; - if (value === void 0) - return void 0; - let mustBe = mustBeFn(value); - if (mustBe !== null) - throw new Error(`${quote(key)} must be ${mustBe}`); - return value; -} -function checkForInvalidFlags(object, keys, where) { - for (let key in object) { - if (!(key in keys)) { - throw new Error(`Invalid option ${where}: ${quote(key)}`); - } - } -} -function validateInitializeOptions(options) { - let keys = /* @__PURE__ */ Object.create(null); - let wasmURL = getFlag(options, keys, "wasmURL", mustBeStringOrURL); - let wasmModule = getFlag(options, keys, "wasmModule", mustBeWebAssemblyModule); - let worker = getFlag(options, keys, "worker", mustBeBoolean); - checkForInvalidFlags(options, keys, "in initialize() call"); - return { - wasmURL, - wasmModule, - worker - }; -} -function validateMangleCache(mangleCache) { - let validated; - if (mangleCache !== void 0) { - validated = /* @__PURE__ */ Object.create(null); - for (let key in mangleCache) { - let value = mangleCache[key]; - if (typeof value === "string" || value === false) { - validated[key] = value; - } else { - throw new Error(`Expected ${quote(key)} in mangle cache to map to either a string or false`); - } - } - } - return validated; -} -function pushLogFlags(flags, options, keys, isTTY, logLevelDefault) { - let color = getFlag(options, keys, "color", mustBeBoolean); - let logLevel = getFlag(options, keys, "logLevel", mustBeString); - let logLimit = getFlag(options, keys, "logLimit", mustBeInteger); - if (color !== void 0) - flags.push(`--color=${color}`); - else if (isTTY) - flags.push(`--color=true`); - flags.push(`--log-level=${logLevel || logLevelDefault}`); - flags.push(`--log-limit=${logLimit || 0}`); -} -function validateStringValue(value, what, key) { - if (typeof value !== "string") { - throw new Error(`Expected value for ${what}${key !== void 0 ? 
" " + quote(key) : ""} to be a string, got ${typeof value} instead`); - } - return value; -} -function pushCommonFlags(flags, options, keys) { - let legalComments = getFlag(options, keys, "legalComments", mustBeString); - let sourceRoot = getFlag(options, keys, "sourceRoot", mustBeString); - let sourcesContent = getFlag(options, keys, "sourcesContent", mustBeBoolean); - let target = getFlag(options, keys, "target", mustBeStringOrArray); - let format = getFlag(options, keys, "format", mustBeString); - let globalName = getFlag(options, keys, "globalName", mustBeString); - let mangleProps = getFlag(options, keys, "mangleProps", mustBeRegExp); - let reserveProps = getFlag(options, keys, "reserveProps", mustBeRegExp); - let mangleQuoted = getFlag(options, keys, "mangleQuoted", mustBeBoolean); - let minify = getFlag(options, keys, "minify", mustBeBoolean); - let minifySyntax = getFlag(options, keys, "minifySyntax", mustBeBoolean); - let minifyWhitespace = getFlag(options, keys, "minifyWhitespace", mustBeBoolean); - let minifyIdentifiers = getFlag(options, keys, "minifyIdentifiers", mustBeBoolean); - let lineLimit = getFlag(options, keys, "lineLimit", mustBeInteger); - let drop = getFlag(options, keys, "drop", mustBeArray); - let dropLabels = getFlag(options, keys, "dropLabels", mustBeArray); - let charset = getFlag(options, keys, "charset", mustBeString); - let treeShaking = getFlag(options, keys, "treeShaking", mustBeBoolean); - let ignoreAnnotations = getFlag(options, keys, "ignoreAnnotations", mustBeBoolean); - let jsx = getFlag(options, keys, "jsx", mustBeString); - let jsxFactory = getFlag(options, keys, "jsxFactory", mustBeString); - let jsxFragment = getFlag(options, keys, "jsxFragment", mustBeString); - let jsxImportSource = getFlag(options, keys, "jsxImportSource", mustBeString); - let jsxDev = getFlag(options, keys, "jsxDev", mustBeBoolean); - let jsxSideEffects = getFlag(options, keys, "jsxSideEffects", mustBeBoolean); - let define = getFlag(options, keys, "define", mustBeObject); - let logOverride = getFlag(options, keys, "logOverride", mustBeObject); - let supported = getFlag(options, keys, "supported", mustBeObject); - let pure = getFlag(options, keys, "pure", mustBeArray); - let keepNames = getFlag(options, keys, "keepNames", mustBeBoolean); - let platform = getFlag(options, keys, "platform", mustBeString); - let tsconfigRaw = getFlag(options, keys, "tsconfigRaw", mustBeStringOrObject); - if (legalComments) - flags.push(`--legal-comments=${legalComments}`); - if (sourceRoot !== void 0) - flags.push(`--source-root=${sourceRoot}`); - if (sourcesContent !== void 0) - flags.push(`--sources-content=${sourcesContent}`); - if (target) { - if (Array.isArray(target)) - flags.push(`--target=${Array.from(target).map(validateTarget).join(",")}`); - else - flags.push(`--target=${validateTarget(target)}`); - } - if (format) - flags.push(`--format=${format}`); - if (globalName) - flags.push(`--global-name=${globalName}`); - if (platform) - flags.push(`--platform=${platform}`); - if (tsconfigRaw) - flags.push(`--tsconfig-raw=${typeof tsconfigRaw === "string" ? 
tsconfigRaw : JSON.stringify(tsconfigRaw)}`); - if (minify) - flags.push("--minify"); - if (minifySyntax) - flags.push("--minify-syntax"); - if (minifyWhitespace) - flags.push("--minify-whitespace"); - if (minifyIdentifiers) - flags.push("--minify-identifiers"); - if (lineLimit) - flags.push(`--line-limit=${lineLimit}`); - if (charset) - flags.push(`--charset=${charset}`); - if (treeShaking !== void 0) - flags.push(`--tree-shaking=${treeShaking}`); - if (ignoreAnnotations) - flags.push(`--ignore-annotations`); - if (drop) - for (let what of drop) - flags.push(`--drop:${validateStringValue(what, "drop")}`); - if (dropLabels) - flags.push(`--drop-labels=${Array.from(dropLabels).map((what) => validateStringValue(what, "dropLabels")).join(",")}`); - if (mangleProps) - flags.push(`--mangle-props=${mangleProps.source}`); - if (reserveProps) - flags.push(`--reserve-props=${reserveProps.source}`); - if (mangleQuoted !== void 0) - flags.push(`--mangle-quoted=${mangleQuoted}`); - if (jsx) - flags.push(`--jsx=${jsx}`); - if (jsxFactory) - flags.push(`--jsx-factory=${jsxFactory}`); - if (jsxFragment) - flags.push(`--jsx-fragment=${jsxFragment}`); - if (jsxImportSource) - flags.push(`--jsx-import-source=${jsxImportSource}`); - if (jsxDev) - flags.push(`--jsx-dev`); - if (jsxSideEffects) - flags.push(`--jsx-side-effects`); - if (define) { - for (let key in define) { - if (key.indexOf("=") >= 0) - throw new Error(`Invalid define: ${key}`); - flags.push(`--define:${key}=${validateStringValue(define[key], "define", key)}`); - } - } - if (logOverride) { - for (let key in logOverride) { - if (key.indexOf("=") >= 0) - throw new Error(`Invalid log override: ${key}`); - flags.push(`--log-override:${key}=${validateStringValue(logOverride[key], "log override", key)}`); - } - } - if (supported) { - for (let key in supported) { - if (key.indexOf("=") >= 0) - throw new Error(`Invalid supported: ${key}`); - const value = supported[key]; - if (typeof value !== "boolean") - throw new Error(`Expected value for supported ${quote(key)} to be a boolean, got ${typeof value} instead`); - flags.push(`--supported:${key}=${value}`); - } - } - if (pure) - for (let fn of pure) - flags.push(`--pure:${validateStringValue(fn, "pure")}`); - if (keepNames) - flags.push(`--keep-names`); -} -function flagsForBuildOptions(callName, options, isTTY, logLevelDefault, writeDefault) { - var _a; - let flags = []; - let entries = []; - let keys = /* @__PURE__ */ Object.create(null); - let stdinContents = null; - let stdinResolveDir = null; - pushLogFlags(flags, options, keys, isTTY, logLevelDefault); - pushCommonFlags(flags, options, keys); - let sourcemap = getFlag(options, keys, "sourcemap", mustBeStringOrBoolean); - let bundle = getFlag(options, keys, "bundle", mustBeBoolean); - let splitting = getFlag(options, keys, "splitting", mustBeBoolean); - let preserveSymlinks = getFlag(options, keys, "preserveSymlinks", mustBeBoolean); - let metafile = getFlag(options, keys, "metafile", mustBeBoolean); - let outfile = getFlag(options, keys, "outfile", mustBeString); - let outdir = getFlag(options, keys, "outdir", mustBeString); - let outbase = getFlag(options, keys, "outbase", mustBeString); - let tsconfig = getFlag(options, keys, "tsconfig", mustBeString); - let resolveExtensions = getFlag(options, keys, "resolveExtensions", mustBeArray); - let nodePathsInput = getFlag(options, keys, "nodePaths", mustBeArray); - let mainFields = getFlag(options, keys, "mainFields", mustBeArray); - let conditions = getFlag(options, keys, "conditions", mustBeArray); 
- let external = getFlag(options, keys, "external", mustBeArray); - let packages = getFlag(options, keys, "packages", mustBeString); - let alias = getFlag(options, keys, "alias", mustBeObject); - let loader = getFlag(options, keys, "loader", mustBeObject); - let outExtension = getFlag(options, keys, "outExtension", mustBeObject); - let publicPath = getFlag(options, keys, "publicPath", mustBeString); - let entryNames = getFlag(options, keys, "entryNames", mustBeString); - let chunkNames = getFlag(options, keys, "chunkNames", mustBeString); - let assetNames = getFlag(options, keys, "assetNames", mustBeString); - let inject = getFlag(options, keys, "inject", mustBeArray); - let banner = getFlag(options, keys, "banner", mustBeObject); - let footer = getFlag(options, keys, "footer", mustBeObject); - let entryPoints = getFlag(options, keys, "entryPoints", mustBeEntryPoints); - let absWorkingDir = getFlag(options, keys, "absWorkingDir", mustBeString); - let stdin = getFlag(options, keys, "stdin", mustBeObject); - let write = (_a = getFlag(options, keys, "write", mustBeBoolean)) != null ? _a : writeDefault; - let allowOverwrite = getFlag(options, keys, "allowOverwrite", mustBeBoolean); - let mangleCache = getFlag(options, keys, "mangleCache", mustBeObject); - keys.plugins = true; - checkForInvalidFlags(options, keys, `in ${callName}() call`); - if (sourcemap) - flags.push(`--sourcemap${sourcemap === true ? "" : `=${sourcemap}`}`); - if (bundle) - flags.push("--bundle"); - if (allowOverwrite) - flags.push("--allow-overwrite"); - if (splitting) - flags.push("--splitting"); - if (preserveSymlinks) - flags.push("--preserve-symlinks"); - if (metafile) - flags.push(`--metafile`); - if (outfile) - flags.push(`--outfile=${outfile}`); - if (outdir) - flags.push(`--outdir=${outdir}`); - if (outbase) - flags.push(`--outbase=${outbase}`); - if (tsconfig) - flags.push(`--tsconfig=${tsconfig}`); - if (packages) - flags.push(`--packages=${packages}`); - if (resolveExtensions) { - let values = []; - for (let value of resolveExtensions) { - validateStringValue(value, "resolve extension"); - if (value.indexOf(",") >= 0) - throw new Error(`Invalid resolve extension: ${value}`); - values.push(value); - } - flags.push(`--resolve-extensions=${values.join(",")}`); - } - if (publicPath) - flags.push(`--public-path=${publicPath}`); - if (entryNames) - flags.push(`--entry-names=${entryNames}`); - if (chunkNames) - flags.push(`--chunk-names=${chunkNames}`); - if (assetNames) - flags.push(`--asset-names=${assetNames}`); - if (mainFields) { - let values = []; - for (let value of mainFields) { - validateStringValue(value, "main field"); - if (value.indexOf(",") >= 0) - throw new Error(`Invalid main field: ${value}`); - values.push(value); - } - flags.push(`--main-fields=${values.join(",")}`); - } - if (conditions) { - let values = []; - for (let value of conditions) { - validateStringValue(value, "condition"); - if (value.indexOf(",") >= 0) - throw new Error(`Invalid condition: ${value}`); - values.push(value); - } - flags.push(`--conditions=${values.join(",")}`); - } - if (external) - for (let name of external) - flags.push(`--external:${validateStringValue(name, "external")}`); - if (alias) { - for (let old in alias) { - if (old.indexOf("=") >= 0) - throw new Error(`Invalid package name in alias: ${old}`); - flags.push(`--alias:${old}=${validateStringValue(alias[old], "alias", old)}`); - } - } - if (banner) { - for (let type in banner) { - if (type.indexOf("=") >= 0) - throw new Error(`Invalid banner file type: ${type}`); - 
flags.push(`--banner:${type}=${validateStringValue(banner[type], "banner", type)}`); - } - } - if (footer) { - for (let type in footer) { - if (type.indexOf("=") >= 0) - throw new Error(`Invalid footer file type: ${type}`); - flags.push(`--footer:${type}=${validateStringValue(footer[type], "footer", type)}`); - } - } - if (inject) - for (let path of inject) - flags.push(`--inject:${validateStringValue(path, "inject")}`); - if (loader) { - for (let ext in loader) { - if (ext.indexOf("=") >= 0) - throw new Error(`Invalid loader extension: ${ext}`); - flags.push(`--loader:${ext}=${validateStringValue(loader[ext], "loader", ext)}`); - } - } - if (outExtension) { - for (let ext in outExtension) { - if (ext.indexOf("=") >= 0) - throw new Error(`Invalid out extension: ${ext}`); - flags.push(`--out-extension:${ext}=${validateStringValue(outExtension[ext], "out extension", ext)}`); - } - } - if (entryPoints) { - if (Array.isArray(entryPoints)) { - for (let i = 0, n = entryPoints.length; i < n; i++) { - let entryPoint = entryPoints[i]; - if (typeof entryPoint === "object" && entryPoint !== null) { - let entryPointKeys = /* @__PURE__ */ Object.create(null); - let input = getFlag(entryPoint, entryPointKeys, "in", mustBeString); - let output = getFlag(entryPoint, entryPointKeys, "out", mustBeString); - checkForInvalidFlags(entryPoint, entryPointKeys, "in entry point at index " + i); - if (input === void 0) - throw new Error('Missing property "in" for entry point at index ' + i); - if (output === void 0) - throw new Error('Missing property "out" for entry point at index ' + i); - entries.push([output, input]); - } else { - entries.push(["", validateStringValue(entryPoint, "entry point at index " + i)]); - } - } - } else { - for (let key in entryPoints) { - entries.push([key, validateStringValue(entryPoints[key], "entry point", key)]); - } - } - } - if (stdin) { - let stdinKeys = /* @__PURE__ */ Object.create(null); - let contents = getFlag(stdin, stdinKeys, "contents", mustBeStringOrUint8Array); - let resolveDir = getFlag(stdin, stdinKeys, "resolveDir", mustBeString); - let sourcefile = getFlag(stdin, stdinKeys, "sourcefile", mustBeString); - let loader2 = getFlag(stdin, stdinKeys, "loader", mustBeString); - checkForInvalidFlags(stdin, stdinKeys, 'in "stdin" object'); - if (sourcefile) - flags.push(`--sourcefile=${sourcefile}`); - if (loader2) - flags.push(`--loader=${loader2}`); - if (resolveDir) - stdinResolveDir = resolveDir; - if (typeof contents === "string") - stdinContents = encodeUTF8(contents); - else if (contents instanceof Uint8Array) - stdinContents = contents; - } - let nodePaths = []; - if (nodePathsInput) { - for (let value of nodePathsInput) { - value += ""; - nodePaths.push(value); - } - } - return { - entries, - flags, - write, - stdinContents, - stdinResolveDir, - absWorkingDir, - nodePaths, - mangleCache: validateMangleCache(mangleCache) - }; -} -function flagsForTransformOptions(callName, options, isTTY, logLevelDefault) { - let flags = []; - let keys = /* @__PURE__ */ Object.create(null); - pushLogFlags(flags, options, keys, isTTY, logLevelDefault); - pushCommonFlags(flags, options, keys); - let sourcemap = getFlag(options, keys, "sourcemap", mustBeStringOrBoolean); - let sourcefile = getFlag(options, keys, "sourcefile", mustBeString); - let loader = getFlag(options, keys, "loader", mustBeString); - let banner = getFlag(options, keys, "banner", mustBeString); - let footer = getFlag(options, keys, "footer", mustBeString); - let mangleCache = getFlag(options, keys, "mangleCache", 
mustBeObject); - checkForInvalidFlags(options, keys, `in ${callName}() call`); - if (sourcemap) - flags.push(`--sourcemap=${sourcemap === true ? "external" : sourcemap}`); - if (sourcefile) - flags.push(`--sourcefile=${sourcefile}`); - if (loader) - flags.push(`--loader=${loader}`); - if (banner) - flags.push(`--banner=${banner}`); - if (footer) - flags.push(`--footer=${footer}`); - return { - flags, - mangleCache: validateMangleCache(mangleCache) - }; -} -function createChannel(streamIn) { - const requestCallbacksByKey = {}; - const closeData = { didClose: false, reason: "" }; - let responseCallbacks = {}; - let nextRequestID = 0; - let nextBuildKey = 0; - let stdout = new Uint8Array(16 * 1024); - let stdoutUsed = 0; - let readFromStdout = (chunk) => { - let limit = stdoutUsed + chunk.length; - if (limit > stdout.length) { - let swap = new Uint8Array(limit * 2); - swap.set(stdout); - stdout = swap; - } - stdout.set(chunk, stdoutUsed); - stdoutUsed += chunk.length; - let offset = 0; - while (offset + 4 <= stdoutUsed) { - let length = readUInt32LE(stdout, offset); - if (offset + 4 + length > stdoutUsed) { - break; - } - offset += 4; - handleIncomingPacket(stdout.subarray(offset, offset + length)); - offset += length; - } - if (offset > 0) { - stdout.copyWithin(0, offset, stdoutUsed); - stdoutUsed -= offset; - } - }; - let afterClose = (error) => { - closeData.didClose = true; - if (error) - closeData.reason = ": " + (error.message || error); - const text = "The service was stopped" + closeData.reason; - for (let id in responseCallbacks) { - responseCallbacks[id](text, null); - } - responseCallbacks = {}; - }; - let sendRequest = (refs, value, callback) => { - if (closeData.didClose) - return callback("The service is no longer running" + closeData.reason, null); - let id = nextRequestID++; - responseCallbacks[id] = (error, response) => { - try { - callback(error, response); - } finally { - if (refs) - refs.unref(); - } - }; - if (refs) - refs.ref(); - streamIn.writeToStdin(encodePacket({ id, isRequest: true, value })); - }; - let sendResponse = (id, value) => { - if (closeData.didClose) - throw new Error("The service is no longer running" + closeData.reason); - streamIn.writeToStdin(encodePacket({ id, isRequest: false, value })); - }; - let handleRequest = (id, request) => __async(this, null, function* () { - try { - if (request.command === "ping") { - sendResponse(id, {}); - return; - } - if (typeof request.key === "number") { - const requestCallbacks = requestCallbacksByKey[request.key]; - if (requestCallbacks) { - const callback = requestCallbacks[request.command]; - if (callback) { - yield callback(id, request); - return; - } - } - } - throw new Error(`Invalid command: ` + request.command); - } catch (e) { - const errors = [extractErrorMessageV8(e, streamIn, null, void 0, "")]; - try { - sendResponse(id, { errors }); - } catch (e2) { - } - } - }); - let isFirstPacket = true; - let handleIncomingPacket = (bytes) => { - if (isFirstPacket) { - isFirstPacket = false; - let binaryVersion = String.fromCharCode(...bytes); - if (binaryVersion !== "0.19.0") { - throw new Error(`Cannot start service: Host version "${"0.19.0"}" does not match binary version ${quote(binaryVersion)}`); - } - return; - } - let packet = decodePacket(bytes); - if (packet.isRequest) { - handleRequest(packet.id, packet.value); - } else { - let callback = responseCallbacks[packet.id]; - delete responseCallbacks[packet.id]; - if (packet.value.error) - callback(packet.value.error, {}); - else - callback(null, packet.value); 
- } - }; - let buildOrContext = ({ callName, refs, options, isTTY, defaultWD, callback }) => { - let refCount = 0; - const buildKey = nextBuildKey++; - const requestCallbacks = {}; - const buildRefs = { - ref() { - if (++refCount === 1) { - if (refs) - refs.ref(); - } - }, - unref() { - if (--refCount === 0) { - delete requestCallbacksByKey[buildKey]; - if (refs) - refs.unref(); - } - } - }; - requestCallbacksByKey[buildKey] = requestCallbacks; - buildRefs.ref(); - buildOrContextImpl( - callName, - buildKey, - sendRequest, - sendResponse, - buildRefs, - streamIn, - requestCallbacks, - options, - isTTY, - defaultWD, - (err, res) => { - try { - callback(err, res); - } finally { - buildRefs.unref(); - } - } - ); - }; - let transform2 = ({ callName, refs, input, options, isTTY, fs, callback }) => { - const details = createObjectStash(); - let start = (inputPath) => { - try { - if (typeof input !== "string" && !(input instanceof Uint8Array)) - throw new Error('The input to "transform" must be a string or a Uint8Array'); - let { - flags, - mangleCache - } = flagsForTransformOptions(callName, options, isTTY, transformLogLevelDefault); - let request = { - command: "transform", - flags, - inputFS: inputPath !== null, - input: inputPath !== null ? encodeUTF8(inputPath) : typeof input === "string" ? encodeUTF8(input) : input - }; - if (mangleCache) - request.mangleCache = mangleCache; - sendRequest(refs, request, (error, response) => { - if (error) - return callback(new Error(error), null); - let errors = replaceDetailsInMessages(response.errors, details); - let warnings = replaceDetailsInMessages(response.warnings, details); - let outstanding = 1; - let next = () => { - if (--outstanding === 0) { - let result = { - warnings, - code: response.code, - map: response.map, - mangleCache: void 0, - legalComments: void 0 - }; - if ("legalComments" in response) - result.legalComments = response == null ? void 0 : response.legalComments; - if (response.mangleCache) - result.mangleCache = response == null ? 
void 0 : response.mangleCache; - callback(null, result); - } - }; - if (errors.length > 0) - return callback(failureErrorWithLog("Transform failed", errors, warnings), null); - if (response.codeFS) { - outstanding++; - fs.readFile(response.code, (err, contents) => { - if (err !== null) { - callback(err, null); - } else { - response.code = contents; - next(); - } - }); - } - if (response.mapFS) { - outstanding++; - fs.readFile(response.map, (err, contents) => { - if (err !== null) { - callback(err, null); - } else { - response.map = contents; - next(); - } - }); - } - next(); - }); - } catch (e) { - let flags = []; - try { - pushLogFlags(flags, options, {}, isTTY, transformLogLevelDefault); - } catch (e2) { - } - const error = extractErrorMessageV8(e, streamIn, details, void 0, ""); - sendRequest(refs, { command: "error", flags, error }, () => { - error.detail = details.load(error.detail); - callback(failureErrorWithLog("Transform failed", [error], []), null); - }); - } - }; - if ((typeof input === "string" || input instanceof Uint8Array) && input.length > 1024 * 1024) { - let next = start; - start = () => fs.writeFile(input, next); - } - start(null); - }; - let formatMessages2 = ({ callName, refs, messages, options, callback }) => { - let result = sanitizeMessages(messages, "messages", null, ""); - if (!options) - throw new Error(`Missing second argument in ${callName}() call`); - let keys = {}; - let kind = getFlag(options, keys, "kind", mustBeString); - let color = getFlag(options, keys, "color", mustBeBoolean); - let terminalWidth = getFlag(options, keys, "terminalWidth", mustBeInteger); - checkForInvalidFlags(options, keys, `in ${callName}() call`); - if (kind === void 0) - throw new Error(`Missing "kind" in ${callName}() call`); - if (kind !== "error" && kind !== "warning") - throw new Error(`Expected "kind" to be "error" or "warning" in ${callName}() call`); - let request = { - command: "format-msgs", - messages: result, - isWarning: kind === "warning" - }; - if (color !== void 0) - request.color = color; - if (terminalWidth !== void 0) - request.terminalWidth = terminalWidth; - sendRequest(refs, request, (error, response) => { - if (error) - return callback(new Error(error), null); - callback(null, response.messages); - }); - }; - let analyzeMetafile2 = ({ callName, refs, metafile, options, callback }) => { - if (options === void 0) - options = {}; - let keys = {}; - let color = getFlag(options, keys, "color", mustBeBoolean); - let verbose = getFlag(options, keys, "verbose", mustBeBoolean); - checkForInvalidFlags(options, keys, `in ${callName}() call`); - let request = { - command: "analyze-metafile", - metafile - }; - if (color !== void 0) - request.color = color; - if (verbose !== void 0) - request.verbose = verbose; - sendRequest(refs, request, (error, response) => { - if (error) - return callback(new Error(error), null); - callback(null, response.result); - }); - }; - return { - readFromStdout, - afterClose, - service: { - buildOrContext, - transform: transform2, - formatMessages: formatMessages2, - analyzeMetafile: analyzeMetafile2 - } - }; -} -function buildOrContextImpl(callName, buildKey, sendRequest, sendResponse, refs, streamIn, requestCallbacks, options, isTTY, defaultWD, callback) { - const details = createObjectStash(); - const isContext = callName === "context"; - const handleError = (e, pluginName) => { - const flags = []; - try { - pushLogFlags(flags, options, {}, isTTY, buildLogLevelDefault); - } catch (e2) { - } - const message = extractErrorMessageV8(e, streamIn, 
details, void 0, pluginName); - sendRequest(refs, { command: "error", flags, error: message }, () => { - message.detail = details.load(message.detail); - callback(failureErrorWithLog(isContext ? "Context failed" : "Build failed", [message], []), null); - }); - }; - let plugins; - if (typeof options === "object") { - const value = options.plugins; - if (value !== void 0) { - if (!Array.isArray(value)) - return handleError(new Error(`"plugins" must be an array`), ""); - plugins = value; - } - } - if (plugins && plugins.length > 0) { - if (streamIn.isSync) - return handleError(new Error("Cannot use plugins in synchronous API calls"), ""); - handlePlugins( - buildKey, - sendRequest, - sendResponse, - refs, - streamIn, - requestCallbacks, - options, - plugins, - details - ).then( - (result) => { - if (!result.ok) - return handleError(result.error, result.pluginName); - try { - buildOrContextContinue(result.requestPlugins, result.runOnEndCallbacks, result.scheduleOnDisposeCallbacks); - } catch (e) { - handleError(e, ""); - } - }, - (e) => handleError(e, "") - ); - return; - } - try { - buildOrContextContinue(null, (result, done) => done([], []), () => { - }); - } catch (e) { - handleError(e, ""); - } - function buildOrContextContinue(requestPlugins, runOnEndCallbacks, scheduleOnDisposeCallbacks) { - const writeDefault = streamIn.hasFS; - const { - entries, - flags, - write, - stdinContents, - stdinResolveDir, - absWorkingDir, - nodePaths, - mangleCache - } = flagsForBuildOptions(callName, options, isTTY, buildLogLevelDefault, writeDefault); - if (write && !streamIn.hasFS) - throw new Error(`The "write" option is unavailable in this environment`); - const request = { - command: "build", - key: buildKey, - entries, - flags, - write, - stdinContents, - stdinResolveDir, - absWorkingDir: absWorkingDir || defaultWD, - nodePaths, - context: isContext - }; - if (requestPlugins) - request.plugins = requestPlugins; - if (mangleCache) - request.mangleCache = mangleCache; - const buildResponseToResult = (response, callback2) => { - const result = { - errors: replaceDetailsInMessages(response.errors, details), - warnings: replaceDetailsInMessages(response.warnings, details), - outputFiles: void 0, - metafile: void 0, - mangleCache: void 0 - }; - const originalErrors = result.errors.slice(); - const originalWarnings = result.warnings.slice(); - if (response.outputFiles) - result.outputFiles = response.outputFiles.map(convertOutputFiles); - if (response.metafile) - result.metafile = JSON.parse(response.metafile); - if (response.mangleCache) - result.mangleCache = response.mangleCache; - if (response.writeToStdout !== void 0) - console.log(decodeUTF8(response.writeToStdout).replace(/\n$/, "")); - runOnEndCallbacks(result, (onEndErrors, onEndWarnings) => { - if (originalErrors.length > 0 || onEndErrors.length > 0) { - const error = failureErrorWithLog("Build failed", originalErrors.concat(onEndErrors), originalWarnings.concat(onEndWarnings)); - return callback2(error, null, onEndErrors, onEndWarnings); - } - callback2(null, result, onEndErrors, onEndWarnings); - }); - }; - let latestResultPromise; - let provideLatestResult; - if (isContext) - requestCallbacks["on-end"] = (id, request2) => new Promise((resolve) => { - buildResponseToResult(request2, (err, result, onEndErrors, onEndWarnings) => { - const response = { - errors: onEndErrors, - warnings: onEndWarnings - }; - if (provideLatestResult) - provideLatestResult(err, result); - latestResultPromise = void 0; - provideLatestResult = void 0; - sendResponse(id, 
response); - resolve(); - }); - }); - sendRequest(refs, request, (error, response) => { - if (error) - return callback(new Error(error), null); - if (!isContext) { - return buildResponseToResult(response, (err, res) => { - scheduleOnDisposeCallbacks(); - return callback(err, res); - }); - } - if (response.errors.length > 0) { - return callback(failureErrorWithLog("Context failed", response.errors, response.warnings), null); - } - let didDispose = false; - const result = { - rebuild: () => { - if (!latestResultPromise) - latestResultPromise = new Promise((resolve, reject) => { - let settlePromise; - provideLatestResult = (err, result2) => { - if (!settlePromise) - settlePromise = () => err ? reject(err) : resolve(result2); - }; - const triggerAnotherBuild = () => { - const request2 = { - command: "rebuild", - key: buildKey - }; - sendRequest(refs, request2, (error2, response2) => { - if (error2) { - reject(new Error(error2)); - } else if (settlePromise) { - settlePromise(); - } else { - triggerAnotherBuild(); - } - }); - }; - triggerAnotherBuild(); - }); - return latestResultPromise; - }, - watch: (options2 = {}) => new Promise((resolve, reject) => { - if (!streamIn.hasFS) - throw new Error(`Cannot use the "watch" API in this environment`); - const keys = {}; - checkForInvalidFlags(options2, keys, `in watch() call`); - const request2 = { - command: "watch", - key: buildKey - }; - sendRequest(refs, request2, (error2) => { - if (error2) - reject(new Error(error2)); - else - resolve(void 0); - }); - }), - serve: (options2 = {}) => new Promise((resolve, reject) => { - if (!streamIn.hasFS) - throw new Error(`Cannot use the "serve" API in this environment`); - const keys = {}; - const port = getFlag(options2, keys, "port", mustBeInteger); - const host = getFlag(options2, keys, "host", mustBeString); - const servedir = getFlag(options2, keys, "servedir", mustBeString); - const keyfile = getFlag(options2, keys, "keyfile", mustBeString); - const certfile = getFlag(options2, keys, "certfile", mustBeString); - const fallback = getFlag(options2, keys, "fallback", mustBeString); - const onRequest = getFlag(options2, keys, "onRequest", mustBeFunction); - checkForInvalidFlags(options2, keys, `in serve() call`); - const request2 = { - command: "serve", - key: buildKey, - onRequest: !!onRequest - }; - if (port !== void 0) - request2.port = port; - if (host !== void 0) - request2.host = host; - if (servedir !== void 0) - request2.servedir = servedir; - if (keyfile !== void 0) - request2.keyfile = keyfile; - if (certfile !== void 0) - request2.certfile = certfile; - if (fallback !== void 0) - request2.fallback = fallback; - sendRequest(refs, request2, (error2, response2) => { - if (error2) - return reject(new Error(error2)); - if (onRequest) { - requestCallbacks["serve-request"] = (id, request3) => { - onRequest(request3.args); - sendResponse(id, {}); - }; - } - resolve(response2); - }); - }), - cancel: () => new Promise((resolve) => { - if (didDispose) - return resolve(); - const request2 = { - command: "cancel", - key: buildKey - }; - sendRequest(refs, request2, () => { - resolve(); - }); - }), - dispose: () => new Promise((resolve) => { - if (didDispose) - return resolve(); - didDispose = true; - const request2 = { - command: "dispose", - key: buildKey - }; - sendRequest(refs, request2, () => { - resolve(); - scheduleOnDisposeCallbacks(); - refs.unref(); - }); - }) - }; - refs.ref(); - callback(null, result); - }); - } -} -var handlePlugins = (buildKey, sendRequest, sendResponse, refs, streamIn, 
requestCallbacks, initialOptions, plugins, details) => __async(void 0, null, function* () { - let onStartCallbacks = []; - let onEndCallbacks = []; - let onResolveCallbacks = {}; - let onLoadCallbacks = {}; - let onDisposeCallbacks = []; - let nextCallbackID = 0; - let i = 0; - let requestPlugins = []; - let isSetupDone = false; - plugins = [...plugins]; - for (let item of plugins) { - let keys = {}; - if (typeof item !== "object") - throw new Error(`Plugin at index ${i} must be an object`); - const name = getFlag(item, keys, "name", mustBeString); - if (typeof name !== "string" || name === "") - throw new Error(`Plugin at index ${i} is missing a name`); - try { - let setup = getFlag(item, keys, "setup", mustBeFunction); - if (typeof setup !== "function") - throw new Error(`Plugin is missing a setup function`); - checkForInvalidFlags(item, keys, `on plugin ${quote(name)}`); - let plugin = { - name, - onStart: false, - onEnd: false, - onResolve: [], - onLoad: [] - }; - i++; - let resolve = (path, options = {}) => { - if (!isSetupDone) - throw new Error('Cannot call "resolve" before plugin setup has completed'); - if (typeof path !== "string") - throw new Error(`The path to resolve must be a string`); - let keys2 = /* @__PURE__ */ Object.create(null); - let pluginName = getFlag(options, keys2, "pluginName", mustBeString); - let importer = getFlag(options, keys2, "importer", mustBeString); - let namespace = getFlag(options, keys2, "namespace", mustBeString); - let resolveDir = getFlag(options, keys2, "resolveDir", mustBeString); - let kind = getFlag(options, keys2, "kind", mustBeString); - let pluginData = getFlag(options, keys2, "pluginData", canBeAnything); - checkForInvalidFlags(options, keys2, "in resolve() call"); - return new Promise((resolve2, reject) => { - const request = { - command: "resolve", - path, - key: buildKey, - pluginName: name - }; - if (pluginName != null) - request.pluginName = pluginName; - if (importer != null) - request.importer = importer; - if (namespace != null) - request.namespace = namespace; - if (resolveDir != null) - request.resolveDir = resolveDir; - if (kind != null) - request.kind = kind; - else - throw new Error(`Must specify "kind" when calling "resolve"`); - if (pluginData != null) - request.pluginData = details.store(pluginData); - sendRequest(refs, request, (error, response) => { - if (error !== null) - reject(new Error(error)); - else - resolve2({ - errors: replaceDetailsInMessages(response.errors, details), - warnings: replaceDetailsInMessages(response.warnings, details), - path: response.path, - external: response.external, - sideEffects: response.sideEffects, - namespace: response.namespace, - suffix: response.suffix, - pluginData: details.load(response.pluginData) - }); - }); - }); - }; - let promise = setup({ - initialOptions, - resolve, - onStart(callback) { - let registeredText = `This error came from the "onStart" callback registered here:`; - let registeredNote = extractCallerV8(new Error(registeredText), streamIn, "onStart"); - onStartCallbacks.push({ name, callback, note: registeredNote }); - plugin.onStart = true; - }, - onEnd(callback) { - let registeredText = `This error came from the "onEnd" callback registered here:`; - let registeredNote = extractCallerV8(new Error(registeredText), streamIn, "onEnd"); - onEndCallbacks.push({ name, callback, note: registeredNote }); - plugin.onEnd = true; - }, - onResolve(options, callback) { - let registeredText = `This error came from the "onResolve" callback registered here:`; - let registeredNote 
= extractCallerV8(new Error(registeredText), streamIn, "onResolve"); - let keys2 = {}; - let filter = getFlag(options, keys2, "filter", mustBeRegExp); - let namespace = getFlag(options, keys2, "namespace", mustBeString); - checkForInvalidFlags(options, keys2, `in onResolve() call for plugin ${quote(name)}`); - if (filter == null) - throw new Error(`onResolve() call is missing a filter`); - let id = nextCallbackID++; - onResolveCallbacks[id] = { name, callback, note: registeredNote }; - plugin.onResolve.push({ id, filter: filter.source, namespace: namespace || "" }); - }, - onLoad(options, callback) { - let registeredText = `This error came from the "onLoad" callback registered here:`; - let registeredNote = extractCallerV8(new Error(registeredText), streamIn, "onLoad"); - let keys2 = {}; - let filter = getFlag(options, keys2, "filter", mustBeRegExp); - let namespace = getFlag(options, keys2, "namespace", mustBeString); - checkForInvalidFlags(options, keys2, `in onLoad() call for plugin ${quote(name)}`); - if (filter == null) - throw new Error(`onLoad() call is missing a filter`); - let id = nextCallbackID++; - onLoadCallbacks[id] = { name, callback, note: registeredNote }; - plugin.onLoad.push({ id, filter: filter.source, namespace: namespace || "" }); - }, - onDispose(callback) { - onDisposeCallbacks.push(callback); - }, - esbuild: streamIn.esbuild - }); - if (promise) - yield promise; - requestPlugins.push(plugin); - } catch (e) { - return { ok: false, error: e, pluginName: name }; - } - } - requestCallbacks["on-start"] = (id, request) => __async(void 0, null, function* () { - let response = { errors: [], warnings: [] }; - yield Promise.all(onStartCallbacks.map((_0) => __async(void 0, [_0], function* ({ name, callback, note }) { - try { - let result = yield callback(); - if (result != null) { - if (typeof result !== "object") - throw new Error(`Expected onStart() callback in plugin ${quote(name)} to return an object`); - let keys = {}; - let errors = getFlag(result, keys, "errors", mustBeArray); - let warnings = getFlag(result, keys, "warnings", mustBeArray); - checkForInvalidFlags(result, keys, `from onStart() callback in plugin ${quote(name)}`); - if (errors != null) - response.errors.push(...sanitizeMessages(errors, "errors", details, name)); - if (warnings != null) - response.warnings.push(...sanitizeMessages(warnings, "warnings", details, name)); - } - } catch (e) { - response.errors.push(extractErrorMessageV8(e, streamIn, details, note && note(), name)); - } - }))); - sendResponse(id, response); - }); - requestCallbacks["on-resolve"] = (id, request) => __async(void 0, null, function* () { - let response = {}, name = "", callback, note; - for (let id2 of request.ids) { - try { - ({ name, callback, note } = onResolveCallbacks[id2]); - let result = yield callback({ - path: request.path, - importer: request.importer, - namespace: request.namespace, - resolveDir: request.resolveDir, - kind: request.kind, - pluginData: details.load(request.pluginData) - }); - if (result != null) { - if (typeof result !== "object") - throw new Error(`Expected onResolve() callback in plugin ${quote(name)} to return an object`); - let keys = {}; - let pluginName = getFlag(result, keys, "pluginName", mustBeString); - let path = getFlag(result, keys, "path", mustBeString); - let namespace = getFlag(result, keys, "namespace", mustBeString); - let suffix = getFlag(result, keys, "suffix", mustBeString); - let external = getFlag(result, keys, "external", mustBeBoolean); - let sideEffects = getFlag(result, keys, 
"sideEffects", mustBeBoolean); - let pluginData = getFlag(result, keys, "pluginData", canBeAnything); - let errors = getFlag(result, keys, "errors", mustBeArray); - let warnings = getFlag(result, keys, "warnings", mustBeArray); - let watchFiles = getFlag(result, keys, "watchFiles", mustBeArray); - let watchDirs = getFlag(result, keys, "watchDirs", mustBeArray); - checkForInvalidFlags(result, keys, `from onResolve() callback in plugin ${quote(name)}`); - response.id = id2; - if (pluginName != null) - response.pluginName = pluginName; - if (path != null) - response.path = path; - if (namespace != null) - response.namespace = namespace; - if (suffix != null) - response.suffix = suffix; - if (external != null) - response.external = external; - if (sideEffects != null) - response.sideEffects = sideEffects; - if (pluginData != null) - response.pluginData = details.store(pluginData); - if (errors != null) - response.errors = sanitizeMessages(errors, "errors", details, name); - if (warnings != null) - response.warnings = sanitizeMessages(warnings, "warnings", details, name); - if (watchFiles != null) - response.watchFiles = sanitizeStringArray(watchFiles, "watchFiles"); - if (watchDirs != null) - response.watchDirs = sanitizeStringArray(watchDirs, "watchDirs"); - break; - } - } catch (e) { - response = { id: id2, errors: [extractErrorMessageV8(e, streamIn, details, note && note(), name)] }; - break; - } - } - sendResponse(id, response); - }); - requestCallbacks["on-load"] = (id, request) => __async(void 0, null, function* () { - let response = {}, name = "", callback, note; - for (let id2 of request.ids) { - try { - ({ name, callback, note } = onLoadCallbacks[id2]); - let result = yield callback({ - path: request.path, - namespace: request.namespace, - suffix: request.suffix, - pluginData: details.load(request.pluginData) - }); - if (result != null) { - if (typeof result !== "object") - throw new Error(`Expected onLoad() callback in plugin ${quote(name)} to return an object`); - let keys = {}; - let pluginName = getFlag(result, keys, "pluginName", mustBeString); - let contents = getFlag(result, keys, "contents", mustBeStringOrUint8Array); - let resolveDir = getFlag(result, keys, "resolveDir", mustBeString); - let pluginData = getFlag(result, keys, "pluginData", canBeAnything); - let loader = getFlag(result, keys, "loader", mustBeString); - let errors = getFlag(result, keys, "errors", mustBeArray); - let warnings = getFlag(result, keys, "warnings", mustBeArray); - let watchFiles = getFlag(result, keys, "watchFiles", mustBeArray); - let watchDirs = getFlag(result, keys, "watchDirs", mustBeArray); - checkForInvalidFlags(result, keys, `from onLoad() callback in plugin ${quote(name)}`); - response.id = id2; - if (pluginName != null) - response.pluginName = pluginName; - if (contents instanceof Uint8Array) - response.contents = contents; - else if (contents != null) - response.contents = encodeUTF8(contents); - if (resolveDir != null) - response.resolveDir = resolveDir; - if (pluginData != null) - response.pluginData = details.store(pluginData); - if (loader != null) - response.loader = loader; - if (errors != null) - response.errors = sanitizeMessages(errors, "errors", details, name); - if (warnings != null) - response.warnings = sanitizeMessages(warnings, "warnings", details, name); - if (watchFiles != null) - response.watchFiles = sanitizeStringArray(watchFiles, "watchFiles"); - if (watchDirs != null) - response.watchDirs = sanitizeStringArray(watchDirs, "watchDirs"); - break; - } - } catch (e) { - 
response = { id: id2, errors: [extractErrorMessageV8(e, streamIn, details, note && note(), name)] }; - break; - } - } - sendResponse(id, response); - }); - let runOnEndCallbacks = (result, done) => done([], []); - if (onEndCallbacks.length > 0) { - runOnEndCallbacks = (result, done) => { - (() => __async(void 0, null, function* () { - const onEndErrors = []; - const onEndWarnings = []; - for (const { name, callback, note } of onEndCallbacks) { - let newErrors; - let newWarnings; - try { - const value = yield callback(result); - if (value != null) { - if (typeof value !== "object") - throw new Error(`Expected onEnd() callback in plugin ${quote(name)} to return an object`); - let keys = {}; - let errors = getFlag(value, keys, "errors", mustBeArray); - let warnings = getFlag(value, keys, "warnings", mustBeArray); - checkForInvalidFlags(value, keys, `from onEnd() callback in plugin ${quote(name)}`); - if (errors != null) - newErrors = sanitizeMessages(errors, "errors", details, name); - if (warnings != null) - newWarnings = sanitizeMessages(warnings, "warnings", details, name); - } - } catch (e) { - newErrors = [extractErrorMessageV8(e, streamIn, details, note && note(), name)]; - } - if (newErrors) { - onEndErrors.push(...newErrors); - try { - result.errors.push(...newErrors); - } catch (e) { - } - } - if (newWarnings) { - onEndWarnings.push(...newWarnings); - try { - result.warnings.push(...newWarnings); - } catch (e) { - } - } - } - done(onEndErrors, onEndWarnings); - }))(); - }; - } - let scheduleOnDisposeCallbacks = () => { - for (const cb of onDisposeCallbacks) { - setTimeout(() => cb(), 0); - } - }; - isSetupDone = true; - return { - ok: true, - requestPlugins, - runOnEndCallbacks, - scheduleOnDisposeCallbacks - }; -}); -function createObjectStash() { - const map = /* @__PURE__ */ new Map(); - let nextID = 0; - return { - load(id) { - return map.get(id); - }, - store(value) { - if (value === void 0) - return -1; - const id = nextID++; - map.set(id, value); - return id; - } - }; -} -function extractCallerV8(e, streamIn, ident) { - let note; - let tried = false; - return () => { - if (tried) - return note; - tried = true; - try { - let lines = (e.stack + "").split("\n"); - lines.splice(1, 1); - let location2 = parseStackLinesV8(streamIn, lines, ident); - if (location2) { - note = { text: e.message, location: location2 }; - return note; - } - } catch (e2) { - } - }; -} -function extractErrorMessageV8(e, streamIn, stash, note, pluginName) { - let text = "Internal error"; - let location2 = null; - try { - text = (e && e.message || e) + ""; - } catch (e2) { - } - try { - location2 = parseStackLinesV8(streamIn, (e.stack + "").split("\n"), ""); - } catch (e2) { - } - return { id: "", pluginName, text, location: location2, notes: note ? [note] : [], detail: stash ? 
stash.store(e) : -1 }; -} -function parseStackLinesV8(streamIn, lines, ident) { - let at = " at "; - if (streamIn.readFileSync && !lines[0].startsWith(at) && lines[1].startsWith(at)) { - for (let i = 1; i < lines.length; i++) { - let line = lines[i]; - if (!line.startsWith(at)) - continue; - line = line.slice(at.length); - while (true) { - let match = /^(?:new |async )?\S+ \((.*)\)$/.exec(line); - if (match) { - line = match[1]; - continue; - } - match = /^eval at \S+ \((.*)\)(?:, \S+:\d+:\d+)?$/.exec(line); - if (match) { - line = match[1]; - continue; - } - match = /^(\S+):(\d+):(\d+)$/.exec(line); - if (match) { - let contents; - try { - contents = streamIn.readFileSync(match[1], "utf8"); - } catch (e) { - break; - } - let lineText = contents.split(/\r\n|\r|\n|\u2028|\u2029/)[+match[2] - 1] || ""; - let column = +match[3] - 1; - let length = lineText.slice(column, column + ident.length) === ident ? ident.length : 0; - return { - file: match[1], - namespace: "file", - line: +match[2], - column: encodeUTF8(lineText.slice(0, column)).length, - length: encodeUTF8(lineText.slice(column, column + length)).length, - lineText: lineText + "\n" + lines.slice(1).join("\n"), - suggestion: "" - }; - } - break; - } - } - } - return null; -} -function failureErrorWithLog(text, errors, warnings) { - let limit = 5; - let summary = errors.length < 1 ? "" : ` with ${errors.length} error${errors.length < 2 ? "" : "s"}:` + errors.slice(0, limit + 1).map((e, i) => { - if (i === limit) - return "\n..."; - if (!e.location) - return ` -error: ${e.text}`; - let { file, line, column } = e.location; - let pluginText = e.pluginName ? `[plugin: ${e.pluginName}] ` : ""; - return ` -${file}:${line}:${column}: ERROR: ${pluginText}${e.text}`; - }).join(""); - let error = new Error(`${text}${summary}`); - error.errors = errors; - error.warnings = warnings; - return error; -} -function replaceDetailsInMessages(messages, stash) { - for (const message of messages) { - message.detail = stash.load(message.detail); - } - return messages; -} -function sanitizeLocation(location2, where) { - if (location2 == null) - return null; - let keys = {}; - let file = getFlag(location2, keys, "file", mustBeString); - let namespace = getFlag(location2, keys, "namespace", mustBeString); - let line = getFlag(location2, keys, "line", mustBeInteger); - let column = getFlag(location2, keys, "column", mustBeInteger); - let length = getFlag(location2, keys, "length", mustBeInteger); - let lineText = getFlag(location2, keys, "lineText", mustBeString); - let suggestion = getFlag(location2, keys, "suggestion", mustBeString); - checkForInvalidFlags(location2, keys, where); - return { - file: file || "", - namespace: namespace || "", - line: line || 0, - column: column || 0, - length: length || 0, - lineText: lineText || "", - suggestion: suggestion || "" - }; -} -function sanitizeMessages(messages, property, stash, fallbackPluginName) { - let messagesClone = []; - let index = 0; - for (const message of messages) { - let keys = {}; - let id = getFlag(message, keys, "id", mustBeString); - let pluginName = getFlag(message, keys, "pluginName", mustBeString); - let text = getFlag(message, keys, "text", mustBeString); - let location2 = getFlag(message, keys, "location", mustBeObjectOrNull); - let notes = getFlag(message, keys, "notes", mustBeArray); - let detail = getFlag(message, keys, "detail", canBeAnything); - let where = `in element ${index} of "${property}"`; - checkForInvalidFlags(message, keys, where); - let notesClone = []; - if (notes) { - for 
(const note of notes) { - let noteKeys = {}; - let noteText = getFlag(note, noteKeys, "text", mustBeString); - let noteLocation = getFlag(note, noteKeys, "location", mustBeObjectOrNull); - checkForInvalidFlags(note, noteKeys, where); - notesClone.push({ - text: noteText || "", - location: sanitizeLocation(noteLocation, where) - }); - } - } - messagesClone.push({ - id: id || "", - pluginName: pluginName || fallbackPluginName, - text: text || "", - location: sanitizeLocation(location2, where), - notes: notesClone, - detail: stash ? stash.store(detail) : -1 - }); - index++; - } - return messagesClone; -} -function sanitizeStringArray(values, property) { - const result = []; - for (const value of values) { - if (typeof value !== "string") - throw new Error(`${quote(property)} must be an array of strings`); - result.push(value); - } - return result; -} -function convertOutputFiles({ path, contents, hash }) { - let text = null; - return { - path, - contents, - hash, - get text() { - const binary = this.contents; - if (text === null || binary !== contents) { - contents = binary; - text = decodeUTF8(binary); - } - return text; - } - }; -} - -// lib/npm/browser.ts -var version = "0.19.0"; -var build = (options) => ensureServiceIsRunning().build(options); -var context = (options) => ensureServiceIsRunning().context(options); -var transform = (input, options) => ensureServiceIsRunning().transform(input, options); -var formatMessages = (messages, options) => ensureServiceIsRunning().formatMessages(messages, options); -var analyzeMetafile = (metafile, options) => ensureServiceIsRunning().analyzeMetafile(metafile, options); -var buildSync = () => { - throw new Error(`The "buildSync" API only works in node`); -}; -var transformSync = () => { - throw new Error(`The "transformSync" API only works in node`); -}; -var formatMessagesSync = () => { - throw new Error(`The "formatMessagesSync" API only works in node`); -}; -var analyzeMetafileSync = () => { - throw new Error(`The "analyzeMetafileSync" API only works in node`); -}; -var initializePromise; -var longLivedService; -var ensureServiceIsRunning = () => { - if (longLivedService) - return longLivedService; - if (initializePromise) - throw new Error('You need to wait for the promise returned from "initialize" to be resolved before calling this'); - throw new Error('You need to call "initialize" before calling this'); -}; -var initialize = (options) => { - options = validateInitializeOptions(options || {}); - let wasmURL = options.wasmURL; - let wasmModule = options.wasmModule; - let useWorker = options.worker !== false; - if (!wasmURL && !wasmModule) - throw new Error('Must provide either the "wasmURL" option or the "wasmModule" option'); - if (initializePromise) - throw new Error('Cannot call "initialize" more than once'); - initializePromise = startRunningService(wasmURL || "", wasmModule, useWorker); - initializePromise.catch(() => { - initializePromise = void 0; - }); - return initializePromise; -}; -var startRunningService = (wasmURL, wasmModule, useWorker) => __async(void 0, null, function* () { - let worker; - if (useWorker) { - let blob = new Blob([`onmessage=${'((postMessage) => {\n // Copyright 2018 The Go Authors. 
All rights reserved.\n // Use of this source code is governed by a BSD-style\n // license that can be found in the LICENSE file.\n var __async = (__this, __arguments, generator) => {\n return new Promise((resolve, reject) => {\n var fulfilled = (value) => {\n try {\n step(generator.next(value));\n } catch (e) {\n reject(e);\n }\n };\n var rejected = (value) => {\n try {\n step(generator.throw(value));\n } catch (e) {\n reject(e);\n }\n };\n var step = (x) => x.done ? resolve(x.value) : Promise.resolve(x.value).then(fulfilled, rejected);\n step((generator = generator.apply(__this, __arguments)).next());\n });\n };\n let onmessage;\n let globalThis = {};\n for (let o = self; o; o = Object.getPrototypeOf(o))\n for (let k of Object.getOwnPropertyNames(o))\n if (!(k in globalThis))\n Object.defineProperty(globalThis, k, { get: () => self[k] });\n "use strict";\n (() => {\n const enosys = () => {\n const err = new Error("not implemented");\n err.code = "ENOSYS";\n return err;\n };\n if (!globalThis.fs) {\n let outputBuf = "";\n globalThis.fs = {\n constants: { O_WRONLY: -1, O_RDWR: -1, O_CREAT: -1, O_TRUNC: -1, O_APPEND: -1, O_EXCL: -1 },\n // unused\n writeSync(fd, buf) {\n outputBuf += decoder.decode(buf);\n const nl = outputBuf.lastIndexOf("\\n");\n if (nl != -1) {\n console.log(outputBuf.substring(0, nl));\n outputBuf = outputBuf.substring(nl + 1);\n }\n return buf.length;\n },\n write(fd, buf, offset, length, position, callback) {\n if (offset !== 0 || length !== buf.length || position !== null) {\n callback(enosys());\n return;\n }\n const n = this.writeSync(fd, buf);\n callback(null, n);\n },\n chmod(path, mode, callback) {\n callback(enosys());\n },\n chown(path, uid, gid, callback) {\n callback(enosys());\n },\n close(fd, callback) {\n callback(enosys());\n },\n fchmod(fd, mode, callback) {\n callback(enosys());\n },\n fchown(fd, uid, gid, callback) {\n callback(enosys());\n },\n fstat(fd, callback) {\n callback(enosys());\n },\n fsync(fd, callback) {\n callback(null);\n },\n ftruncate(fd, length, callback) {\n callback(enosys());\n },\n lchown(path, uid, gid, callback) {\n callback(enosys());\n },\n link(path, link, callback) {\n callback(enosys());\n },\n lstat(path, callback) {\n callback(enosys());\n },\n mkdir(path, perm, callback) {\n callback(enosys());\n },\n open(path, flags, mode, callback) {\n callback(enosys());\n },\n read(fd, buffer, offset, length, position, callback) {\n callback(enosys());\n },\n readdir(path, callback) {\n callback(enosys());\n },\n readlink(path, callback) {\n callback(enosys());\n },\n rename(from, to, callback) {\n callback(enosys());\n },\n rmdir(path, callback) {\n callback(enosys());\n },\n stat(path, callback) {\n callback(enosys());\n },\n symlink(path, link, callback) {\n callback(enosys());\n },\n truncate(path, length, callback) {\n callback(enosys());\n },\n unlink(path, callback) {\n callback(enosys());\n },\n utimes(path, atime, mtime, callback) {\n callback(enosys());\n }\n };\n }\n if (!globalThis.process) {\n globalThis.process = {\n getuid() {\n return -1;\n },\n getgid() {\n return -1;\n },\n geteuid() {\n return -1;\n },\n getegid() {\n return -1;\n },\n getgroups() {\n throw enosys();\n },\n pid: -1,\n ppid: -1,\n umask() {\n throw enosys();\n },\n cwd() {\n throw enosys();\n },\n chdir() {\n throw enosys();\n }\n };\n }\n if (!globalThis.crypto) {\n throw new Error("globalThis.crypto is not available, polyfill required (crypto.getRandomValues only)");\n }\n if (!globalThis.performance) {\n throw new Error("globalThis.performance 
is not available, polyfill required (performance.now only)");\n }\n if (!globalThis.TextEncoder) {\n throw new Error("globalThis.TextEncoder is not available, polyfill required");\n }\n if (!globalThis.TextDecoder) {\n throw new Error("globalThis.TextDecoder is not available, polyfill required");\n }\n const encoder = new TextEncoder("utf-8");\n const decoder = new TextDecoder("utf-8");\n globalThis.Go = class {\n constructor() {\n this.argv = ["js"];\n this.env = {};\n this.exit = (code) => {\n if (code !== 0) {\n console.warn("exit code:", code);\n }\n };\n this._exitPromise = new Promise((resolve) => {\n this._resolveExitPromise = resolve;\n });\n this._pendingEvent = null;\n this._scheduledTimeouts = /* @__PURE__ */ new Map();\n this._nextCallbackTimeoutID = 1;\n const setInt64 = (addr, v) => {\n this.mem.setUint32(addr + 0, v, true);\n this.mem.setUint32(addr + 4, Math.floor(v / 4294967296), true);\n };\n const getInt64 = (addr) => {\n const low = this.mem.getUint32(addr + 0, true);\n const high = this.mem.getInt32(addr + 4, true);\n return low + high * 4294967296;\n };\n const loadValue = (addr) => {\n const f = this.mem.getFloat64(addr, true);\n if (f === 0) {\n return void 0;\n }\n if (!isNaN(f)) {\n return f;\n }\n const id = this.mem.getUint32(addr, true);\n return this._values[id];\n };\n const storeValue = (addr, v) => {\n const nanHead = 2146959360;\n if (typeof v === "number" && v !== 0) {\n if (isNaN(v)) {\n this.mem.setUint32(addr + 4, nanHead, true);\n this.mem.setUint32(addr, 0, true);\n return;\n }\n this.mem.setFloat64(addr, v, true);\n return;\n }\n if (v === void 0) {\n this.mem.setFloat64(addr, 0, true);\n return;\n }\n let id = this._ids.get(v);\n if (id === void 0) {\n id = this._idPool.pop();\n if (id === void 0) {\n id = this._values.length;\n }\n this._values[id] = v;\n this._goRefCounts[id] = 0;\n this._ids.set(v, id);\n }\n this._goRefCounts[id]++;\n let typeFlag = 0;\n switch (typeof v) {\n case "object":\n if (v !== null) {\n typeFlag = 1;\n }\n break;\n case "string":\n typeFlag = 2;\n break;\n case "symbol":\n typeFlag = 3;\n break;\n case "function":\n typeFlag = 4;\n break;\n }\n this.mem.setUint32(addr + 4, nanHead | typeFlag, true);\n this.mem.setUint32(addr, id, true);\n };\n const loadSlice = (addr) => {\n const array = getInt64(addr + 0);\n const len = getInt64(addr + 8);\n return new Uint8Array(this._inst.exports.mem.buffer, array, len);\n };\n const loadSliceOfValues = (addr) => {\n const array = getInt64(addr + 0);\n const len = getInt64(addr + 8);\n const a = new Array(len);\n for (let i = 0; i < len; i++) {\n a[i] = loadValue(array + i * 8);\n }\n return a;\n };\n const loadString = (addr) => {\n const saddr = getInt64(addr + 0);\n const len = getInt64(addr + 8);\n return decoder.decode(new DataView(this._inst.exports.mem.buffer, saddr, len));\n };\n const timeOrigin = Date.now() - performance.now();\n this.importObject = {\n go: {\n // Go\'s SP does not change as long as no Go code is running. Some operations (e.g. calls, getters and setters)\n // may synchronously trigger a Go event handler. This makes Go code get executed in the middle of the imported\n // function. 
A goroutine can switch to a new stack if the current stack is too small (see morestack function).\n // This changes the SP, thus we have to update the SP used by the imported function.\n // func wasmExit(code int32)\n "runtime.wasmExit": (sp) => {\n sp >>>= 0;\n const code = this.mem.getInt32(sp + 8, true);\n this.exited = true;\n delete this._inst;\n delete this._values;\n delete this._goRefCounts;\n delete this._ids;\n delete this._idPool;\n this.exit(code);\n },\n // func wasmWrite(fd uintptr, p unsafe.Pointer, n int32)\n "runtime.wasmWrite": (sp) => {\n sp >>>= 0;\n const fd = getInt64(sp + 8);\n const p = getInt64(sp + 16);\n const n = this.mem.getInt32(sp + 24, true);\n globalThis.fs.writeSync(fd, new Uint8Array(this._inst.exports.mem.buffer, p, n));\n },\n // func resetMemoryDataView()\n "runtime.resetMemoryDataView": (sp) => {\n sp >>>= 0;\n this.mem = new DataView(this._inst.exports.mem.buffer);\n },\n // func nanotime1() int64\n "runtime.nanotime1": (sp) => {\n sp >>>= 0;\n setInt64(sp + 8, (timeOrigin + performance.now()) * 1e6);\n },\n // func walltime() (sec int64, nsec int32)\n "runtime.walltime": (sp) => {\n sp >>>= 0;\n const msec = (/* @__PURE__ */ new Date()).getTime();\n setInt64(sp + 8, msec / 1e3);\n this.mem.setInt32(sp + 16, msec % 1e3 * 1e6, true);\n },\n // func scheduleTimeoutEvent(delay int64) int32\n "runtime.scheduleTimeoutEvent": (sp) => {\n sp >>>= 0;\n const id = this._nextCallbackTimeoutID;\n this._nextCallbackTimeoutID++;\n this._scheduledTimeouts.set(id, setTimeout(\n () => {\n this._resume();\n while (this._scheduledTimeouts.has(id)) {\n console.warn("scheduleTimeoutEvent: missed timeout event");\n this._resume();\n }\n },\n getInt64(sp + 8) + 1\n // setTimeout has been seen to fire up to 1 millisecond early\n ));\n this.mem.setInt32(sp + 16, id, true);\n },\n // func clearTimeoutEvent(id int32)\n "runtime.clearTimeoutEvent": (sp) => {\n sp >>>= 0;\n const id = this.mem.getInt32(sp + 8, true);\n clearTimeout(this._scheduledTimeouts.get(id));\n this._scheduledTimeouts.delete(id);\n },\n // func getRandomData(r []byte)\n "runtime.getRandomData": (sp) => {\n sp >>>= 0;\n crypto.getRandomValues(loadSlice(sp + 8));\n },\n // func finalizeRef(v ref)\n "syscall/js.finalizeRef": (sp) => {\n sp >>>= 0;\n const id = this.mem.getUint32(sp + 8, true);\n this._goRefCounts[id]--;\n if (this._goRefCounts[id] === 0) {\n const v = this._values[id];\n this._values[id] = null;\n this._ids.delete(v);\n this._idPool.push(id);\n }\n },\n // func stringVal(value string) ref\n "syscall/js.stringVal": (sp) => {\n sp >>>= 0;\n storeValue(sp + 24, loadString(sp + 8));\n },\n // func valueGet(v ref, p string) ref\n "syscall/js.valueGet": (sp) => {\n sp >>>= 0;\n const result = Reflect.get(loadValue(sp + 8), loadString(sp + 16));\n sp = this._inst.exports.getsp() >>> 0;\n storeValue(sp + 32, result);\n },\n // func valueSet(v ref, p string, x ref)\n "syscall/js.valueSet": (sp) => {\n sp >>>= 0;\n Reflect.set(loadValue(sp + 8), loadString(sp + 16), loadValue(sp + 32));\n },\n // func valueDelete(v ref, p string)\n "syscall/js.valueDelete": (sp) => {\n sp >>>= 0;\n Reflect.deleteProperty(loadValue(sp + 8), loadString(sp + 16));\n },\n // func valueIndex(v ref, i int) ref\n "syscall/js.valueIndex": (sp) => {\n sp >>>= 0;\n storeValue(sp + 24, Reflect.get(loadValue(sp + 8), getInt64(sp + 16)));\n },\n // valueSetIndex(v ref, i int, x ref)\n "syscall/js.valueSetIndex": (sp) => {\n sp >>>= 0;\n Reflect.set(loadValue(sp + 8), getInt64(sp + 16), loadValue(sp + 24));\n },\n // func 
valueCall(v ref, m string, args []ref) (ref, bool)\n "syscall/js.valueCall": (sp) => {\n sp >>>= 0;\n try {\n const v = loadValue(sp + 8);\n const m = Reflect.get(v, loadString(sp + 16));\n const args = loadSliceOfValues(sp + 32);\n const result = Reflect.apply(m, v, args);\n sp = this._inst.exports.getsp() >>> 0;\n storeValue(sp + 56, result);\n this.mem.setUint8(sp + 64, 1);\n } catch (err) {\n sp = this._inst.exports.getsp() >>> 0;\n storeValue(sp + 56, err);\n this.mem.setUint8(sp + 64, 0);\n }\n },\n // func valueInvoke(v ref, args []ref) (ref, bool)\n "syscall/js.valueInvoke": (sp) => {\n sp >>>= 0;\n try {\n const v = loadValue(sp + 8);\n const args = loadSliceOfValues(sp + 16);\n const result = Reflect.apply(v, void 0, args);\n sp = this._inst.exports.getsp() >>> 0;\n storeValue(sp + 40, result);\n this.mem.setUint8(sp + 48, 1);\n } catch (err) {\n sp = this._inst.exports.getsp() >>> 0;\n storeValue(sp + 40, err);\n this.mem.setUint8(sp + 48, 0);\n }\n },\n // func valueNew(v ref, args []ref) (ref, bool)\n "syscall/js.valueNew": (sp) => {\n sp >>>= 0;\n try {\n const v = loadValue(sp + 8);\n const args = loadSliceOfValues(sp + 16);\n const result = Reflect.construct(v, args);\n sp = this._inst.exports.getsp() >>> 0;\n storeValue(sp + 40, result);\n this.mem.setUint8(sp + 48, 1);\n } catch (err) {\n sp = this._inst.exports.getsp() >>> 0;\n storeValue(sp + 40, err);\n this.mem.setUint8(sp + 48, 0);\n }\n },\n // func valueLength(v ref) int\n "syscall/js.valueLength": (sp) => {\n sp >>>= 0;\n setInt64(sp + 16, parseInt(loadValue(sp + 8).length));\n },\n // valuePrepareString(v ref) (ref, int)\n "syscall/js.valuePrepareString": (sp) => {\n sp >>>= 0;\n const str = encoder.encode(String(loadValue(sp + 8)));\n storeValue(sp + 16, str);\n setInt64(sp + 24, str.length);\n },\n // valueLoadString(v ref, b []byte)\n "syscall/js.valueLoadString": (sp) => {\n sp >>>= 0;\n const str = loadValue(sp + 8);\n loadSlice(sp + 16).set(str);\n },\n // func valueInstanceOf(v ref, t ref) bool\n "syscall/js.valueInstanceOf": (sp) => {\n sp >>>= 0;\n this.mem.setUint8(sp + 24, loadValue(sp + 8) instanceof loadValue(sp + 16) ? 
1 : 0);\n },\n // func copyBytesToGo(dst []byte, src ref) (int, bool)\n "syscall/js.copyBytesToGo": (sp) => {\n sp >>>= 0;\n const dst = loadSlice(sp + 8);\n const src = loadValue(sp + 32);\n if (!(src instanceof Uint8Array || src instanceof Uint8ClampedArray)) {\n this.mem.setUint8(sp + 48, 0);\n return;\n }\n const toCopy = src.subarray(0, dst.length);\n dst.set(toCopy);\n setInt64(sp + 40, toCopy.length);\n this.mem.setUint8(sp + 48, 1);\n },\n // func copyBytesToJS(dst ref, src []byte) (int, bool)\n "syscall/js.copyBytesToJS": (sp) => {\n sp >>>= 0;\n const dst = loadValue(sp + 8);\n const src = loadSlice(sp + 16);\n if (!(dst instanceof Uint8Array || dst instanceof Uint8ClampedArray)) {\n this.mem.setUint8(sp + 48, 0);\n return;\n }\n const toCopy = src.subarray(0, dst.length);\n dst.set(toCopy);\n setInt64(sp + 40, toCopy.length);\n this.mem.setUint8(sp + 48, 1);\n },\n "debug": (value) => {\n console.log(value);\n }\n }\n };\n }\n run(instance) {\n return __async(this, null, function* () {\n if (!(instance instanceof WebAssembly.Instance)) {\n throw new Error("Go.run: WebAssembly.Instance expected");\n }\n this._inst = instance;\n this.mem = new DataView(this._inst.exports.mem.buffer);\n this._values = [\n // JS values that Go currently has references to, indexed by reference id\n NaN,\n 0,\n null,\n true,\n false,\n globalThis,\n this\n ];\n this._goRefCounts = new Array(this._values.length).fill(Infinity);\n this._ids = /* @__PURE__ */ new Map([\n // mapping from JS values to reference ids\n [0, 1],\n [null, 2],\n [true, 3],\n [false, 4],\n [globalThis, 5],\n [this, 6]\n ]);\n this._idPool = [];\n this.exited = false;\n let offset = 4096;\n const strPtr = (str) => {\n const ptr = offset;\n const bytes = encoder.encode(str + "\\0");\n new Uint8Array(this.mem.buffer, offset, bytes.length).set(bytes);\n offset += bytes.length;\n if (offset % 8 !== 0) {\n offset += 8 - offset % 8;\n }\n return ptr;\n };\n const argc = this.argv.length;\n const argvPtrs = [];\n this.argv.forEach((arg) => {\n argvPtrs.push(strPtr(arg));\n });\n argvPtrs.push(0);\n const keys = Object.keys(this.env).sort();\n keys.forEach((key) => {\n argvPtrs.push(strPtr(`${key}=${this.env[key]}`));\n });\n argvPtrs.push(0);\n const argv = offset;\n argvPtrs.forEach((ptr) => {\n this.mem.setUint32(offset, ptr, true);\n this.mem.setUint32(offset + 4, 0, true);\n offset += 8;\n });\n const wasmMinDataAddr = 4096 + 8192;\n if (offset >= wasmMinDataAddr) {\n throw new Error("total length of command line and environment variables exceeds limit");\n }\n this._inst.exports.run(argc, argv);\n if (this.exited) {\n this._resolveExitPromise();\n }\n yield this._exitPromise;\n });\n }\n _resume() {\n if (this.exited) {\n throw new Error("Go program has already exited");\n }\n this._inst.exports.resume();\n if (this.exited) {\n this._resolveExitPromise();\n }\n }\n _makeFuncWrapper(id) {\n const go = this;\n return function() {\n const event = { id, this: this, args: arguments };\n go._pendingEvent = event;\n go._resume();\n return event.result;\n };\n }\n };\n })();\n onmessage = ({ data: wasm }) => {\n let decoder = new TextDecoder();\n let fs = globalThis.fs;\n let stderr = "";\n fs.writeSync = (fd, buffer) => {\n if (fd === 1) {\n postMessage(buffer);\n } else if (fd === 2) {\n stderr += decoder.decode(buffer);\n let parts = stderr.split("\\n");\n if (parts.length > 1)\n console.log(parts.slice(0, -1).join("\\n"));\n stderr = parts[parts.length - 1];\n } else {\n throw new Error("Bad write");\n }\n return buffer.length;\n };\n 
let stdin = [];\n let resumeStdin;\n let stdinPos = 0;\n onmessage = ({ data }) => {\n if (data.length > 0) {\n stdin.push(data);\n if (resumeStdin)\n resumeStdin();\n }\n };\n fs.read = (fd, buffer, offset, length, position, callback) => {\n if (fd !== 0 || offset !== 0 || length !== buffer.length || position !== null) {\n throw new Error("Bad read");\n }\n if (stdin.length === 0) {\n resumeStdin = () => fs.read(fd, buffer, offset, length, position, callback);\n return;\n }\n let first = stdin[0];\n let count = Math.max(0, Math.min(length, first.length - stdinPos));\n buffer.set(first.subarray(stdinPos, stdinPos + count), offset);\n stdinPos += count;\n if (stdinPos === first.length) {\n stdin.shift();\n stdinPos = 0;\n }\n callback(null, count);\n };\n let go = new globalThis.Go();\n go.argv = ["", `--service=${"0.19.0"}`];\n tryToInstantiateModule(wasm, go).then(\n (instance) => {\n postMessage(null);\n go.run(instance);\n },\n (error) => {\n postMessage(error);\n }\n );\n };\n function tryToInstantiateModule(wasm, go) {\n return __async(this, null, function* () {\n if (wasm instanceof WebAssembly.Module) {\n return WebAssembly.instantiate(wasm, go.importObject);\n }\n const res = yield fetch(wasm);\n if (!res.ok)\n throw new Error(`Failed to download ${JSON.stringify(wasm)}`);\n if ("instantiateStreaming" in WebAssembly && /^application\\/wasm($|;)/i.test(res.headers.get("Content-Type") || "")) {\n const result2 = yield WebAssembly.instantiateStreaming(res, go.importObject);\n return result2.instance;\n }\n const bytes = yield res.arrayBuffer();\n const result = yield WebAssembly.instantiate(bytes, go.importObject);\n return result.instance;\n });\n }\n return (m) => onmessage(m);\n })'}(postMessage)`], { type: "text/javascript" }); - worker = new Worker(URL.createObjectURL(blob)); - } else { - let onmessage = ((postMessage) => { - // Copyright 2018 The Go Authors. All rights reserved. - // Use of this source code is governed by a BSD-style - // license that can be found in the LICENSE file. - var __async = (__this, __arguments, generator) => { - return new Promise((resolve, reject) => { - var fulfilled = (value) => { - try { - step(generator.next(value)); - } catch (e) { - reject(e); - } - }; - var rejected = (value) => { - try { - step(generator.throw(value)); - } catch (e) { - reject(e); - } - }; - var step = (x) => x.done ? 
resolve(x.value) : Promise.resolve(x.value).then(fulfilled, rejected); - step((generator = generator.apply(__this, __arguments)).next()); - }); - }; - let onmessage; - let globalThis = {}; - for (let o = self; o; o = Object.getPrototypeOf(o)) - for (let k of Object.getOwnPropertyNames(o)) - if (!(k in globalThis)) - Object.defineProperty(globalThis, k, { get: () => self[k] }); - "use strict"; - (() => { - const enosys = () => { - const err = new Error("not implemented"); - err.code = "ENOSYS"; - return err; - }; - if (!globalThis.fs) { - let outputBuf = ""; - globalThis.fs = { - constants: { O_WRONLY: -1, O_RDWR: -1, O_CREAT: -1, O_TRUNC: -1, O_APPEND: -1, O_EXCL: -1 }, - // unused - writeSync(fd, buf) { - outputBuf += decoder.decode(buf); - const nl = outputBuf.lastIndexOf("\n"); - if (nl != -1) { - console.log(outputBuf.substring(0, nl)); - outputBuf = outputBuf.substring(nl + 1); - } - return buf.length; - }, - write(fd, buf, offset, length, position, callback) { - if (offset !== 0 || length !== buf.length || position !== null) { - callback(enosys()); - return; - } - const n = this.writeSync(fd, buf); - callback(null, n); - }, - chmod(path, mode, callback) { - callback(enosys()); - }, - chown(path, uid, gid, callback) { - callback(enosys()); - }, - close(fd, callback) { - callback(enosys()); - }, - fchmod(fd, mode, callback) { - callback(enosys()); - }, - fchown(fd, uid, gid, callback) { - callback(enosys()); - }, - fstat(fd, callback) { - callback(enosys()); - }, - fsync(fd, callback) { - callback(null); - }, - ftruncate(fd, length, callback) { - callback(enosys()); - }, - lchown(path, uid, gid, callback) { - callback(enosys()); - }, - link(path, link, callback) { - callback(enosys()); - }, - lstat(path, callback) { - callback(enosys()); - }, - mkdir(path, perm, callback) { - callback(enosys()); - }, - open(path, flags, mode, callback) { - callback(enosys()); - }, - read(fd, buffer, offset, length, position, callback) { - callback(enosys()); - }, - readdir(path, callback) { - callback(enosys()); - }, - readlink(path, callback) { - callback(enosys()); - }, - rename(from, to, callback) { - callback(enosys()); - }, - rmdir(path, callback) { - callback(enosys()); - }, - stat(path, callback) { - callback(enosys()); - }, - symlink(path, link, callback) { - callback(enosys()); - }, - truncate(path, length, callback) { - callback(enosys()); - }, - unlink(path, callback) { - callback(enosys()); - }, - utimes(path, atime, mtime, callback) { - callback(enosys()); - } - }; - } - if (!globalThis.process) { - globalThis.process = { - getuid() { - return -1; - }, - getgid() { - return -1; - }, - geteuid() { - return -1; - }, - getegid() { - return -1; - }, - getgroups() { - throw enosys(); - }, - pid: -1, - ppid: -1, - umask() { - throw enosys(); - }, - cwd() { - throw enosys(); - }, - chdir() { - throw enosys(); - } - }; - } - if (!globalThis.crypto) { - throw new Error("globalThis.crypto is not available, polyfill required (crypto.getRandomValues only)"); - } - if (!globalThis.performance) { - throw new Error("globalThis.performance is not available, polyfill required (performance.now only)"); - } - if (!globalThis.TextEncoder) { - throw new Error("globalThis.TextEncoder is not available, polyfill required"); - } - if (!globalThis.TextDecoder) { - throw new Error("globalThis.TextDecoder is not available, polyfill required"); - } - const encoder = new TextEncoder("utf-8"); - const decoder = new TextDecoder("utf-8"); - globalThis.Go = class { - constructor() { - this.argv = ["js"]; - this.env = {}; - 
this.exit = (code) => { - if (code !== 0) { - console.warn("exit code:", code); - } - }; - this._exitPromise = new Promise((resolve) => { - this._resolveExitPromise = resolve; - }); - this._pendingEvent = null; - this._scheduledTimeouts = /* @__PURE__ */ new Map(); - this._nextCallbackTimeoutID = 1; - const setInt64 = (addr, v) => { - this.mem.setUint32(addr + 0, v, true); - this.mem.setUint32(addr + 4, Math.floor(v / 4294967296), true); - }; - const getInt64 = (addr) => { - const low = this.mem.getUint32(addr + 0, true); - const high = this.mem.getInt32(addr + 4, true); - return low + high * 4294967296; - }; - const loadValue = (addr) => { - const f = this.mem.getFloat64(addr, true); - if (f === 0) { - return void 0; - } - if (!isNaN(f)) { - return f; - } - const id = this.mem.getUint32(addr, true); - return this._values[id]; - }; - const storeValue = (addr, v) => { - const nanHead = 2146959360; - if (typeof v === "number" && v !== 0) { - if (isNaN(v)) { - this.mem.setUint32(addr + 4, nanHead, true); - this.mem.setUint32(addr, 0, true); - return; - } - this.mem.setFloat64(addr, v, true); - return; - } - if (v === void 0) { - this.mem.setFloat64(addr, 0, true); - return; - } - let id = this._ids.get(v); - if (id === void 0) { - id = this._idPool.pop(); - if (id === void 0) { - id = this._values.length; - } - this._values[id] = v; - this._goRefCounts[id] = 0; - this._ids.set(v, id); - } - this._goRefCounts[id]++; - let typeFlag = 0; - switch (typeof v) { - case "object": - if (v !== null) { - typeFlag = 1; - } - break; - case "string": - typeFlag = 2; - break; - case "symbol": - typeFlag = 3; - break; - case "function": - typeFlag = 4; - break; - } - this.mem.setUint32(addr + 4, nanHead | typeFlag, true); - this.mem.setUint32(addr, id, true); - }; - const loadSlice = (addr) => { - const array = getInt64(addr + 0); - const len = getInt64(addr + 8); - return new Uint8Array(this._inst.exports.mem.buffer, array, len); - }; - const loadSliceOfValues = (addr) => { - const array = getInt64(addr + 0); - const len = getInt64(addr + 8); - const a = new Array(len); - for (let i = 0; i < len; i++) { - a[i] = loadValue(array + i * 8); - } - return a; - }; - const loadString = (addr) => { - const saddr = getInt64(addr + 0); - const len = getInt64(addr + 8); - return decoder.decode(new DataView(this._inst.exports.mem.buffer, saddr, len)); - }; - const timeOrigin = Date.now() - performance.now(); - this.importObject = { - go: { - // Go's SP does not change as long as no Go code is running. Some operations (e.g. calls, getters and setters) - // may synchronously trigger a Go event handler. This makes Go code get executed in the middle of the imported - // function. A goroutine can switch to a new stack if the current stack is too small (see morestack function). - // This changes the SP, thus we have to update the SP used by the imported function. 
- // func wasmExit(code int32) - "runtime.wasmExit": (sp) => { - sp >>>= 0; - const code = this.mem.getInt32(sp + 8, true); - this.exited = true; - delete this._inst; - delete this._values; - delete this._goRefCounts; - delete this._ids; - delete this._idPool; - this.exit(code); - }, - // func wasmWrite(fd uintptr, p unsafe.Pointer, n int32) - "runtime.wasmWrite": (sp) => { - sp >>>= 0; - const fd = getInt64(sp + 8); - const p = getInt64(sp + 16); - const n = this.mem.getInt32(sp + 24, true); - globalThis.fs.writeSync(fd, new Uint8Array(this._inst.exports.mem.buffer, p, n)); - }, - // func resetMemoryDataView() - "runtime.resetMemoryDataView": (sp) => { - sp >>>= 0; - this.mem = new DataView(this._inst.exports.mem.buffer); - }, - // func nanotime1() int64 - "runtime.nanotime1": (sp) => { - sp >>>= 0; - setInt64(sp + 8, (timeOrigin + performance.now()) * 1e6); - }, - // func walltime() (sec int64, nsec int32) - "runtime.walltime": (sp) => { - sp >>>= 0; - const msec = (/* @__PURE__ */ new Date()).getTime(); - setInt64(sp + 8, msec / 1e3); - this.mem.setInt32(sp + 16, msec % 1e3 * 1e6, true); - }, - // func scheduleTimeoutEvent(delay int64) int32 - "runtime.scheduleTimeoutEvent": (sp) => { - sp >>>= 0; - const id = this._nextCallbackTimeoutID; - this._nextCallbackTimeoutID++; - this._scheduledTimeouts.set(id, setTimeout( - () => { - this._resume(); - while (this._scheduledTimeouts.has(id)) { - console.warn("scheduleTimeoutEvent: missed timeout event"); - this._resume(); - } - }, - getInt64(sp + 8) + 1 - // setTimeout has been seen to fire up to 1 millisecond early - )); - this.mem.setInt32(sp + 16, id, true); - }, - // func clearTimeoutEvent(id int32) - "runtime.clearTimeoutEvent": (sp) => { - sp >>>= 0; - const id = this.mem.getInt32(sp + 8, true); - clearTimeout(this._scheduledTimeouts.get(id)); - this._scheduledTimeouts.delete(id); - }, - // func getRandomData(r []byte) - "runtime.getRandomData": (sp) => { - sp >>>= 0; - crypto.getRandomValues(loadSlice(sp + 8)); - }, - // func finalizeRef(v ref) - "syscall/js.finalizeRef": (sp) => { - sp >>>= 0; - const id = this.mem.getUint32(sp + 8, true); - this._goRefCounts[id]--; - if (this._goRefCounts[id] === 0) { - const v = this._values[id]; - this._values[id] = null; - this._ids.delete(v); - this._idPool.push(id); - } - }, - // func stringVal(value string) ref - "syscall/js.stringVal": (sp) => { - sp >>>= 0; - storeValue(sp + 24, loadString(sp + 8)); - }, - // func valueGet(v ref, p string) ref - "syscall/js.valueGet": (sp) => { - sp >>>= 0; - const result = Reflect.get(loadValue(sp + 8), loadString(sp + 16)); - sp = this._inst.exports.getsp() >>> 0; - storeValue(sp + 32, result); - }, - // func valueSet(v ref, p string, x ref) - "syscall/js.valueSet": (sp) => { - sp >>>= 0; - Reflect.set(loadValue(sp + 8), loadString(sp + 16), loadValue(sp + 32)); - }, - // func valueDelete(v ref, p string) - "syscall/js.valueDelete": (sp) => { - sp >>>= 0; - Reflect.deleteProperty(loadValue(sp + 8), loadString(sp + 16)); - }, - // func valueIndex(v ref, i int) ref - "syscall/js.valueIndex": (sp) => { - sp >>>= 0; - storeValue(sp + 24, Reflect.get(loadValue(sp + 8), getInt64(sp + 16))); - }, - // valueSetIndex(v ref, i int, x ref) - "syscall/js.valueSetIndex": (sp) => { - sp >>>= 0; - Reflect.set(loadValue(sp + 8), getInt64(sp + 16), loadValue(sp + 24)); - }, - // func valueCall(v ref, m string, args []ref) (ref, bool) - "syscall/js.valueCall": (sp) => { - sp >>>= 0; - try { - const v = loadValue(sp + 8); - const m = Reflect.get(v, loadString(sp + 16)); - 
const args = loadSliceOfValues(sp + 32); - const result = Reflect.apply(m, v, args); - sp = this._inst.exports.getsp() >>> 0; - storeValue(sp + 56, result); - this.mem.setUint8(sp + 64, 1); - } catch (err) { - sp = this._inst.exports.getsp() >>> 0; - storeValue(sp + 56, err); - this.mem.setUint8(sp + 64, 0); - } - }, - // func valueInvoke(v ref, args []ref) (ref, bool) - "syscall/js.valueInvoke": (sp) => { - sp >>>= 0; - try { - const v = loadValue(sp + 8); - const args = loadSliceOfValues(sp + 16); - const result = Reflect.apply(v, void 0, args); - sp = this._inst.exports.getsp() >>> 0; - storeValue(sp + 40, result); - this.mem.setUint8(sp + 48, 1); - } catch (err) { - sp = this._inst.exports.getsp() >>> 0; - storeValue(sp + 40, err); - this.mem.setUint8(sp + 48, 0); - } - }, - // func valueNew(v ref, args []ref) (ref, bool) - "syscall/js.valueNew": (sp) => { - sp >>>= 0; - try { - const v = loadValue(sp + 8); - const args = loadSliceOfValues(sp + 16); - const result = Reflect.construct(v, args); - sp = this._inst.exports.getsp() >>> 0; - storeValue(sp + 40, result); - this.mem.setUint8(sp + 48, 1); - } catch (err) { - sp = this._inst.exports.getsp() >>> 0; - storeValue(sp + 40, err); - this.mem.setUint8(sp + 48, 0); - } - }, - // func valueLength(v ref) int - "syscall/js.valueLength": (sp) => { - sp >>>= 0; - setInt64(sp + 16, parseInt(loadValue(sp + 8).length)); - }, - // valuePrepareString(v ref) (ref, int) - "syscall/js.valuePrepareString": (sp) => { - sp >>>= 0; - const str = encoder.encode(String(loadValue(sp + 8))); - storeValue(sp + 16, str); - setInt64(sp + 24, str.length); - }, - // valueLoadString(v ref, b []byte) - "syscall/js.valueLoadString": (sp) => { - sp >>>= 0; - const str = loadValue(sp + 8); - loadSlice(sp + 16).set(str); - }, - // func valueInstanceOf(v ref, t ref) bool - "syscall/js.valueInstanceOf": (sp) => { - sp >>>= 0; - this.mem.setUint8(sp + 24, loadValue(sp + 8) instanceof loadValue(sp + 16) ? 
1 : 0); - }, - // func copyBytesToGo(dst []byte, src ref) (int, bool) - "syscall/js.copyBytesToGo": (sp) => { - sp >>>= 0; - const dst = loadSlice(sp + 8); - const src = loadValue(sp + 32); - if (!(src instanceof Uint8Array || src instanceof Uint8ClampedArray)) { - this.mem.setUint8(sp + 48, 0); - return; - } - const toCopy = src.subarray(0, dst.length); - dst.set(toCopy); - setInt64(sp + 40, toCopy.length); - this.mem.setUint8(sp + 48, 1); - }, - // func copyBytesToJS(dst ref, src []byte) (int, bool) - "syscall/js.copyBytesToJS": (sp) => { - sp >>>= 0; - const dst = loadValue(sp + 8); - const src = loadSlice(sp + 16); - if (!(dst instanceof Uint8Array || dst instanceof Uint8ClampedArray)) { - this.mem.setUint8(sp + 48, 0); - return; - } - const toCopy = src.subarray(0, dst.length); - dst.set(toCopy); - setInt64(sp + 40, toCopy.length); - this.mem.setUint8(sp + 48, 1); - }, - "debug": (value) => { - console.log(value); - } - } - }; - } - run(instance) { - return __async(this, null, function* () { - if (!(instance instanceof WebAssembly.Instance)) { - throw new Error("Go.run: WebAssembly.Instance expected"); - } - this._inst = instance; - this.mem = new DataView(this._inst.exports.mem.buffer); - this._values = [ - // JS values that Go currently has references to, indexed by reference id - NaN, - 0, - null, - true, - false, - globalThis, - this - ]; - this._goRefCounts = new Array(this._values.length).fill(Infinity); - this._ids = /* @__PURE__ */ new Map([ - // mapping from JS values to reference ids - [0, 1], - [null, 2], - [true, 3], - [false, 4], - [globalThis, 5], - [this, 6] - ]); - this._idPool = []; - this.exited = false; - let offset = 4096; - const strPtr = (str) => { - const ptr = offset; - const bytes = encoder.encode(str + "\0"); - new Uint8Array(this.mem.buffer, offset, bytes.length).set(bytes); - offset += bytes.length; - if (offset % 8 !== 0) { - offset += 8 - offset % 8; - } - return ptr; - }; - const argc = this.argv.length; - const argvPtrs = []; - this.argv.forEach((arg) => { - argvPtrs.push(strPtr(arg)); - }); - argvPtrs.push(0); - const keys = Object.keys(this.env).sort(); - keys.forEach((key) => { - argvPtrs.push(strPtr(`${key}=${this.env[key]}`)); - }); - argvPtrs.push(0); - const argv = offset; - argvPtrs.forEach((ptr) => { - this.mem.setUint32(offset, ptr, true); - this.mem.setUint32(offset + 4, 0, true); - offset += 8; - }); - const wasmMinDataAddr = 4096 + 8192; - if (offset >= wasmMinDataAddr) { - throw new Error("total length of command line and environment variables exceeds limit"); - } - this._inst.exports.run(argc, argv); - if (this.exited) { - this._resolveExitPromise(); - } - yield this._exitPromise; - }); - } - _resume() { - if (this.exited) { - throw new Error("Go program has already exited"); - } - this._inst.exports.resume(); - if (this.exited) { - this._resolveExitPromise(); - } - } - _makeFuncWrapper(id) { - const go = this; - return function() { - const event = { id, this: this, args: arguments }; - go._pendingEvent = event; - go._resume(); - return event.result; - }; - } - }; - })(); - onmessage = ({ data: wasm }) => { - let decoder = new TextDecoder(); - let fs = globalThis.fs; - let stderr = ""; - fs.writeSync = (fd, buffer) => { - if (fd === 1) { - postMessage(buffer); - } else if (fd === 2) { - stderr += decoder.decode(buffer); - let parts = stderr.split("\n"); - if (parts.length > 1) - console.log(parts.slice(0, -1).join("\n")); - stderr = parts[parts.length - 1]; - } else { - throw new Error("Bad write"); - } - return buffer.length; - }; - let 
stdin = []; - let resumeStdin; - let stdinPos = 0; - onmessage = ({ data }) => { - if (data.length > 0) { - stdin.push(data); - if (resumeStdin) - resumeStdin(); - } - }; - fs.read = (fd, buffer, offset, length, position, callback) => { - if (fd !== 0 || offset !== 0 || length !== buffer.length || position !== null) { - throw new Error("Bad read"); - } - if (stdin.length === 0) { - resumeStdin = () => fs.read(fd, buffer, offset, length, position, callback); - return; - } - let first = stdin[0]; - let count = Math.max(0, Math.min(length, first.length - stdinPos)); - buffer.set(first.subarray(stdinPos, stdinPos + count), offset); - stdinPos += count; - if (stdinPos === first.length) { - stdin.shift(); - stdinPos = 0; - } - callback(null, count); - }; - let go = new globalThis.Go(); - go.argv = ["", `--service=${"0.19.0"}`]; - tryToInstantiateModule(wasm, go).then( - (instance) => { - postMessage(null); - go.run(instance); - }, - (error) => { - postMessage(error); - } - ); - }; - function tryToInstantiateModule(wasm, go) { - return __async(this, null, function* () { - if (wasm instanceof WebAssembly.Module) { - return WebAssembly.instantiate(wasm, go.importObject); - } - const res = yield fetch(wasm); - if (!res.ok) - throw new Error(`Failed to download ${JSON.stringify(wasm)}`); - if ("instantiateStreaming" in WebAssembly && /^application\/wasm($|;)/i.test(res.headers.get("Content-Type") || "")) { - const result2 = yield WebAssembly.instantiateStreaming(res, go.importObject); - return result2.instance; - } - const bytes = yield res.arrayBuffer(); - const result = yield WebAssembly.instantiate(bytes, go.importObject); - return result.instance; - }); - } - return (m) => onmessage(m); - })((data) => worker.onmessage({ data })); - worker = { - onmessage: null, - postMessage: (data) => setTimeout(() => onmessage({ data })), - terminate() { - } - }; - } - let firstMessageResolve; - let firstMessageReject; - const firstMessagePromise = new Promise((resolve, reject) => { - firstMessageResolve = resolve; - firstMessageReject = reject; - }); - worker.onmessage = ({ data: error }) => { - worker.onmessage = ({ data }) => readFromStdout(data); - if (error) - firstMessageReject(error); - else - firstMessageResolve(); - }; - worker.postMessage(wasmModule || new URL(wasmURL, location.href).toString()); - let { readFromStdout, service } = createChannel({ - writeToStdin(bytes) { - worker.postMessage(bytes); - }, - isSync: false, - hasFS: false, - esbuild: browser_exports - }); - yield firstMessagePromise; - longLivedService = { - build: (options) => new Promise((resolve, reject) => service.buildOrContext({ - callName: "build", - refs: null, - options, - isTTY: false, - defaultWD: "/", - callback: (err, res) => err ? reject(err) : resolve(res) - })), - context: (options) => new Promise((resolve, reject) => service.buildOrContext({ - callName: "context", - refs: null, - options, - isTTY: false, - defaultWD: "/", - callback: (err, res) => err ? reject(err) : resolve(res) - })), - transform: (input, options) => new Promise((resolve, reject) => service.transform({ - callName: "transform", - refs: null, - input, - options: options || {}, - isTTY: false, - fs: { - readFile(_, callback) { - callback(new Error("Internal error"), null); - }, - writeFile(_, callback) { - callback(null); - } - }, - callback: (err, res) => err ? 
reject(err) : resolve(res) - })), - formatMessages: (messages, options) => new Promise((resolve, reject) => service.formatMessages({ - callName: "formatMessages", - refs: null, - messages, - options, - callback: (err, res) => err ? reject(err) : resolve(res) - })), - analyzeMetafile: (metafile, options) => new Promise((resolve, reject) => service.analyzeMetafile({ - callName: "analyzeMetafile", - refs: null, - metafile: typeof metafile === "string" ? metafile : JSON.stringify(metafile), - options, - callback: (err, res) => err ? reject(err) : resolve(res) - })) - }; -}); -var browser_default = browser_exports; -})(typeof module==="object"?module:{set exports(x){(typeof self!=="undefined"?self:this).esbuild=x}}); diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-37584f50.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-37584f50.js deleted file mode 100644 index f29e2aa4467acec4527e740c50077d6745c6afed..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-37584f50.js +++ /dev/null @@ -1,14 +0,0 @@ -import{m as Kn,_ as Nt}from"./index-0526d562.js";function _e(){}const du=e=>e;function $n(e){return e()}function ei(e){e.forEach($n)}function ti(e){return typeof e=="function"}function ri(e,t){return e!=e?t==t:e!==t||e&&typeof e=="object"||typeof e=="function"}function xr(e,...t){if(e==null){for(const o of t)o(void 0);return _e}const r=e.subscribe(...t);return r.unsubscribe?()=>r.unsubscribe():r}function mu(e){let t;return xr(e,r=>t=r)(),t}function pu(e){const t=typeof e=="string"&&e.match(/^\s*(-?[\d.]+)([^\s]*)\s*$/);return t?[parseFloat(t[1]),t[2]||"px"]:[e,"px"]}const Hr=typeof window<"u";let Lt=Hr?()=>window.performance.now():()=>Date.now(),Or=Hr?e=>requestAnimationFrame(e):_e;const pe=new Set;function Pr(e){pe.forEach(t=>{t.c(e)||(pe.delete(t),t.f())}),pe.size!==0&&Or(Pr)}function oi(e){let t;return pe.size===0&&Or(Pr),{promise:new Promise(r=>{pe.add(t={c:e,f:r})}),abort(){pe.delete(t)}}}const he=[];function ni(e,t){return{subscribe:Te(e,t).subscribe}}function Te(e,t=_e){let r;const o=new Set;function n(l){if(ri(e,l)&&(e=l,r)){const c=!he.length;for(const s of o)s[1](),he.push(s,e);if(c){for(let s=0;s{o.delete(s),o.size===0&&r&&(r(),r=null)}}return{set:n,update:i,subscribe:a}}function xe(e,t,r){const o=!Array.isArray(e),n=o?[e]:e;if(!n.every(Boolean))throw new Error("derived() expects stores as input, got a falsy value");const i=t.length<2;return ni(r,(a,l)=>{let c=!1;const s=[];let u=0,f=_e;const _=()=>{if(u)return;f();const p=t(o?s[0]:s,a,l);i?a(p):f=ti(p)?p:_e},m=n.map((p,g)=>xr(p,w=>{s[g]=w,u&=~(1<{u|=1<0}),r=[],o=0,n=t;o1)throw new RangeError("integer-width stems only accept a single optional option");n.options[0].replace(Bi,function(c,s,u,f,_,m){if(s)t.minimumIntegerDigits=u.length;else{if(f&&_)throw new Error("We currently do not support maximum integer digits");if(m)throw new Error("We currently do not support exact integer digits")}return""});continue}if(Dr.test(n.stem)){t.minimumIntegerDigits=n.stem.length;continue}if(jt.test(n.stem)){if(n.options.length>1)throw new RangeError("Fraction-precision stems only accept a single optional option");n.stem.replace(jt,function(c,s,u,f,_,m){return 
u==="*"?t.minimumFractionDigits=s.length:f&&f[0]==="#"?t.maximumFractionDigits=f.length:_&&m?(t.minimumFractionDigits=_.length,t.maximumFractionDigits=_.length+m.length):(t.minimumFractionDigits=s.length,t.maximumFractionDigits=s.length),""});var i=n.options[0];i==="w"?t=O(O({},t),{trailingZeroDisplay:"stripIfInteger"}):i&&(t=O(O({},t),Dt(i)));continue}if(jr.test(n.stem)){t=O(O({},t),Dt(n.stem));continue}var a=Gr(n.stem);a&&(t=O(O({},t),a));var l=Ai(n.stem);l&&(t=O(O({},t),l))}return t}var Ve={"001":["H","h"],AC:["H","h","hb","hB"],AD:["H","hB"],AE:["h","hB","hb","H"],AF:["H","hb","hB","h"],AG:["h","hb","H","hB"],AI:["H","h","hb","hB"],AL:["h","H","hB"],AM:["H","hB"],AO:["H","hB"],AR:["H","h","hB","hb"],AS:["h","H"],AT:["H","hB"],AU:["h","hb","H","hB"],AW:["H","hB"],AX:["H"],AZ:["H","hB","h"],BA:["H","hB","h"],BB:["h","hb","H","hB"],BD:["h","hB","H"],BE:["H","hB"],BF:["H","hB"],BG:["H","hB","h"],BH:["h","hB","hb","H"],BI:["H","h"],BJ:["H","hB"],BL:["H","hB"],BM:["h","hb","H","hB"],BN:["hb","hB","h","H"],BO:["H","hB","h","hb"],BQ:["H"],BR:["H","hB"],BS:["h","hb","H","hB"],BT:["h","H"],BW:["H","h","hb","hB"],BY:["H","h"],BZ:["H","h","hb","hB"],CA:["h","hb","H","hB"],CC:["H","h","hb","hB"],CD:["hB","H"],CF:["H","h","hB"],CG:["H","hB"],CH:["H","hB","h"],CI:["H","hB"],CK:["H","h","hb","hB"],CL:["H","h","hB","hb"],CM:["H","h","hB"],CN:["H","hB","hb","h"],CO:["h","H","hB","hb"],CP:["H"],CR:["H","h","hB","hb"],CU:["H","h","hB","hb"],CV:["H","hB"],CW:["H","hB"],CX:["H","h","hb","hB"],CY:["h","H","hb","hB"],CZ:["H"],DE:["H","hB"],DG:["H","h","hb","hB"],DJ:["h","H"],DK:["H"],DM:["h","hb","H","hB"],DO:["h","H","hB","hb"],DZ:["h","hB","hb","H"],EA:["H","h","hB","hb"],EC:["H","hB","h","hb"],EE:["H","hB"],EG:["h","hB","hb","H"],EH:["h","hB","hb","H"],ER:["h","H"],ES:["H","hB","h","hb"],ET:["hB","hb","h","H"],FI:["H"],FJ:["h","hb","H","hB"],FK:["H","h","hb","hB"],FM:["h","hb","H","hB"],FO:["H","h"],FR:["H","hB"],GA:["H","hB"],GB:["H","h","hb","hB"],GD:["h","hb","H","hB"],GE:["H","hB","h"],GF:["H","hB"],GG:["H","h","hb","hB"],GH:["h","H"],GI:["H","h","hb","hB"],GL:["H","h"],GM:["h","hb","H","hB"],GN:["H","hB"],GP:["H","hB"],GQ:["H","hB","h","hb"],GR:["h","H","hb","hB"],GT:["H","h","hB","hb"],GU:["h","hb","H","hB"],GW:["H","hB"],GY:["h","hb","H","hB"],HK:["h","hB","hb","H"],HN:["H","h","hB","hb"],HR:["H","hB"],HU:["H","h"],IC:["H","h","hB","hb"],ID:["H"],IE:["H","h","hb","hB"],IL:["H","hB"],IM:["H","h","hb","hB"],IN:["h","H"],IO:["H","h","hb","hB"],IQ:["h","hB","hb","H"],IR:["hB","H"],IS:["H"],IT:["H","hB"],JE:["H","h","hb","hB"],JM:["h","hb","H","hB"],JO:["h","hB","hb","H"],JP:["H","K","h"],KE:["hB","hb","H","h"],KG:["H","h","hB","hb"],KH:["hB","h","H","hb"],KI:["h","hb","H","hB"],KM:["H","h","hB","hb"],KN:["h","hb","H","hB"],KP:["h","H","hB","hb"],KR:["h","H","hB","hb"],KW:["h","hB","hb","H"],KY:["h","hb","H","hB"],KZ:["H","hB"],LA:["H","hb","hB","h"],LB:["h","hB","hb","H"],LC:["h","hb","H","hB"],LI:["H","hB","h"],LK:["H","h","hB","hb"],LR:["h","hb","H","hB"],LS:["h","H"],LT:["H","h","hb","hB"],LU:["H","h","hB"],LV:["H","hB","hb","h"],LY:["h","hB","hb","H"],MA:["H","h","hB","hb"],MC:["H","hB"],MD:["H","hB"],ME:["H","hB","h"],MF:["H","hB"],MG:["H","h"],MH:["h","hb","H","hB"],MK:["H","h","hb","hB"],ML:["H"],MM:["hB","hb","H","h"],MN:["H","h","hb","hB"],MO:["h","hB","hb","H"],MP:["h","hb","H","hB"],MQ:["H","hB"],MR:["h","hB","hb","H"],MS:["H","h","hb","hB"],MT:["H","h"],MU:["H","h"],MV:["H","h"],MW:["h","hb","H","hB"],MX:["H","h","hB","hb"],MY:["hb","hB","h","H"],MZ:["H","hB"],NA:["h","H","hB","hb"],NC:["H","
hB"],NE:["H"],NF:["H","h","hb","hB"],NG:["H","h","hb","hB"],NI:["H","h","hB","hb"],NL:["H","hB"],NO:["H","h"],NP:["H","h","hB"],NR:["H","h","hb","hB"],NU:["H","h","hb","hB"],NZ:["h","hb","H","hB"],OM:["h","hB","hb","H"],PA:["h","H","hB","hb"],PE:["H","hB","h","hb"],PF:["H","h","hB"],PG:["h","H"],PH:["h","hB","hb","H"],PK:["h","hB","H"],PL:["H","h"],PM:["H","hB"],PN:["H","h","hb","hB"],PR:["h","H","hB","hb"],PS:["h","hB","hb","H"],PT:["H","hB"],PW:["h","H"],PY:["H","h","hB","hb"],QA:["h","hB","hb","H"],RE:["H","hB"],RO:["H","hB"],RS:["H","hB","h"],RU:["H"],RW:["H","h"],SA:["h","hB","hb","H"],SB:["h","hb","H","hB"],SC:["H","h","hB"],SD:["h","hB","hb","H"],SE:["H"],SG:["h","hb","H","hB"],SH:["H","h","hb","hB"],SI:["H","hB"],SJ:["H"],SK:["H"],SL:["h","hb","H","hB"],SM:["H","h","hB"],SN:["H","h","hB"],SO:["h","H"],SR:["H","hB"],SS:["h","hb","H","hB"],ST:["H","hB"],SV:["H","h","hB","hb"],SX:["H","h","hb","hB"],SY:["h","hB","hb","H"],SZ:["h","hb","H","hB"],TA:["H","h","hb","hB"],TC:["h","hb","H","hB"],TD:["h","H","hB"],TF:["H","h","hB"],TG:["H","hB"],TH:["H","h"],TJ:["H","h"],TL:["H","hB","hb","h"],TM:["H","h"],TN:["h","hB","hb","H"],TO:["h","H"],TR:["H","hB"],TT:["h","hb","H","hB"],TW:["hB","hb","h","H"],TZ:["hB","hb","H","h"],UA:["H","hB","h"],UG:["hB","hb","H","h"],UM:["h","hb","H","hB"],US:["h","hb","H","hB"],UY:["H","h","hB","hb"],UZ:["H","hB","h"],VA:["H","h","hB"],VC:["h","hb","H","hB"],VE:["h","H","hB","hb"],VG:["h","hb","H","hB"],VI:["h","hb","H","hB"],VN:["H","h"],VU:["h","H"],WF:["H","hB"],WS:["h","H"],XK:["H","hB","h"],YE:["h","hB","hb","H"],YT:["H","hB"],ZA:["H","h","hb","hB"],ZM:["h","hb","H","hB"],ZW:["H","h"],"af-ZA":["H","h","hB","hb"],"ar-001":["h","hB","hb","H"],"ca-ES":["H","h","hB"],"en-001":["h","hb","H","hB"],"es-BO":["H","h","hB","hb"],"es-BR":["H","h","hB","hb"],"es-EC":["H","h","hB","hb"],"es-ES":["H","h","hB","hb"],"es-GQ":["H","h","hB","hb"],"es-PE":["H","h","hB","hb"],"fr-CA":["H","h","hB"],"gl-ES":["H","h","hB"],"gu-IN":["hB","hb","h","H"],"hi-IN":["hB","h","H"],"it-CH":["H","h","hB"],"it-IT":["H","h","hB"],"kn-IN":["hB","h","H"],"ml-IN":["hB","h","H"],"mr-IN":["hB","hb","h","H"],"pa-IN":["hB","hb","h","H"],"ta-IN":["hB","h","hb","H"],"te-IN":["hB","h","H"],"zu-ZA":["H","hB","hb","h"]};function Ii(e,t){for(var r="",o=0;o>1),c="a",s=Ni(t);for((s=="H"||s=="k")&&(l=0);l-- >0;)r+=c;for(;a-- >0;)r=s+r}else n==="J"?r+="H":r+=n}return r}function Ni(e){var t=e.hourCycle;if(t===void 0&&e.hourCycles&&e.hourCycles.length&&(t=e.hourCycles[0]),t)switch(t){case"h24":return"k";case"h23":return"H";case"h12":return"h";case"h11":return"K";default:throw new Error("Invalid hourCycle")}var r=e.language,o;r!=="root"&&(o=e.maximize().region);var n=Ve[o||""]||Ve[r||""]||Ve["".concat(r,"-001")]||Ve["001"];return n[0]}var lt,Li=new RegExp("^".concat(Rr.source,"*")),Mi=new RegExp("".concat(Rr.source,"*$"));function x(e,t){return{start:e,end:t}}var Ri=!!String.prototype.startsWith&&"_a".startsWith("a",1),ji=!!String.fromCodePoint,Di=!!Object.fromEntries,Gi=!!String.prototype.codePointAt,Ui=!!String.prototype.trimStart,Fi=!!String.prototype.trimEnd,Vi=!!Number.isSafeInteger,qi=Vi?Number.isSafeInteger:function(e){return typeof e=="number"&&isFinite(e)&&Math.floor(e)===e&&Math.abs(e)<=9007199254740991},gt=!0;try{var zi=Fr("([^\\p{White_Space}\\p{Pattern_Syntax}]*)","yu");gt=((lt=zi.exec("a"))===null||lt===void 0?void 0:lt[0])==="a"}catch{gt=!1}var Ut=Ri?function(t,r,o){return t.startsWith(r,o)}:function(t,r,o){return t.slice(o,o+r.length)===r},bt=ji?String.fromCodePoint:function(){for(var 
t=[],r=0;ri;){if(a=t[i++],a>1114111)throw RangeError(a+" is not a valid code point");o+=a<65536?String.fromCharCode(a):String.fromCharCode(((a-=65536)>>10)+55296,a%1024+56320)}return o},Ft=Di?Object.fromEntries:function(t){for(var r={},o=0,n=t;o=o)){var n=t.charCodeAt(r),i;return n<55296||n>56319||r+1===o||(i=t.charCodeAt(r+1))<56320||i>57343?n:(n-55296<<10)+(i-56320)+65536}},Xi=Ui?function(t){return t.trimStart()}:function(t){return t.replace(Li,"")},Wi=Fi?function(t){return t.trimEnd()}:function(t){return t.replace(Mi,"")};function Fr(e,t){return new RegExp(e,t)}var vt;if(gt){var Vt=Fr("([^\\p{White_Space}\\p{Pattern_Syntax}]*)","yu");vt=function(t,r){var o;Vt.lastIndex=r;var n=Vt.exec(t);return(o=n[1])!==null&&o!==void 0?o:""}}else vt=function(t,r){for(var o=[];;){var n=Ur(t,r);if(n===void 0||Vr(n)||Qi(n))break;o.push(n),r+=n>=65536?2:1}return bt.apply(void 0,o)};var Zi=function(){function e(t,r){r===void 0&&(r={}),this.message=t,this.position={offset:0,line:1,column:1},this.ignoreTag=!!r.ignoreTag,this.locale=r.locale,this.requiresOtherClause=!!r.requiresOtherClause,this.shouldParseSkeletons=!!r.shouldParseSkeletons}return e.prototype.parse=function(){if(this.offset()!==0)throw Error("parser can only be used once");return this.parseMessage(0,"",!1)},e.prototype.parseMessage=function(t,r,o){for(var n=[];!this.isEOF();){var i=this.char();if(i===123){var a=this.parseArgument(t,o);if(a.err)return a;n.push(a.val)}else{if(i===125&&t>0)break;if(i===35&&(r==="plural"||r==="selectordinal")){var l=this.clonePosition();this.bump(),n.push({type:k.pound,location:x(l,this.clonePosition())})}else if(i===60&&!this.ignoreTag&&this.peek()===47){if(o)break;return this.error(T.UNMATCHED_CLOSING_TAG,x(this.clonePosition(),this.clonePosition()))}else if(i===60&&!this.ignoreTag&&yt(this.peek()||0)){var a=this.parseTag(t,r);if(a.err)return a;n.push(a.val)}else{var a=this.parseLiteral(t,r);if(a.err)return a;n.push(a.val)}}}return{val:n,err:null}},e.prototype.parseTag=function(t,r){var o=this.clonePosition();this.bump();var n=this.parseTagName();if(this.bumpSpace(),this.bumpIf("/>"))return{val:{type:k.literal,value:"<".concat(n,"/>"),location:x(o,this.clonePosition())},err:null};if(this.bumpIf(">")){var i=this.parseMessage(t+1,r,!0);if(i.err)return i;var a=i.val,l=this.clonePosition();if(this.bumpIf("")?{val:{type:k.tag,value:n,children:a,location:x(o,this.clonePosition())},err:null}:this.error(T.INVALID_TAG,x(l,this.clonePosition())))}else return this.error(T.UNCLOSED_TAG,x(o,this.clonePosition()))}else return this.error(T.INVALID_TAG,x(o,this.clonePosition()))},e.prototype.parseTagName=function(){var t=this.offset();for(this.bump();!this.isEOF()&&Ji(this.char());)this.bump();return this.message.slice(t,this.offset())},e.prototype.parseLiteral=function(t,r){for(var o=this.clonePosition(),n="";;){var i=this.tryParseQuote(r);if(i){n+=i;continue}var a=this.tryParseUnquoted(t,r);if(a){n+=a;continue}var l=this.tryParseLeftAngleBracket();if(l){n+=l;continue}break}var c=x(o,this.clonePosition());return{val:{type:k.literal,value:n,location:c},err:null}},e.prototype.tryParseLeftAngleBracket=function(){return!this.isEOF()&&this.char()===60&&(this.ignoreTag||!Yi(this.peek()||0))?(this.bump(),"<"):null},e.prototype.tryParseQuote=function(t){if(this.isEOF()||this.char()!==39)return null;switch(this.peek()){case 39:return this.bump(),this.bump(),"'";case 123:case 60:case 62:case 125:break;case 35:if(t==="plural"||t==="selectordinal")break;return null;default:return null}this.bump();var 
r=[this.char()];for(this.bump();!this.isEOF();){var o=this.char();if(o===39)if(this.peek()===39)r.push(39),this.bump();else{this.bump();break}else r.push(o);this.bump()}return bt.apply(void 0,r)},e.prototype.tryParseUnquoted=function(t,r){if(this.isEOF())return null;var o=this.char();return o===60||o===123||o===35&&(r==="plural"||r==="selectordinal")||o===125&&t>0?null:(this.bump(),bt(o))},e.prototype.parseArgument=function(t,r){var o=this.clonePosition();if(this.bump(),this.bumpSpace(),this.isEOF())return this.error(T.EXPECT_ARGUMENT_CLOSING_BRACE,x(o,this.clonePosition()));if(this.char()===125)return this.bump(),this.error(T.EMPTY_ARGUMENT,x(o,this.clonePosition()));var n=this.parseIdentifierIfPossible().value;if(!n)return this.error(T.MALFORMED_ARGUMENT,x(o,this.clonePosition()));if(this.bumpSpace(),this.isEOF())return this.error(T.EXPECT_ARGUMENT_CLOSING_BRACE,x(o,this.clonePosition()));switch(this.char()){case 125:return this.bump(),{val:{type:k.argument,value:n,location:x(o,this.clonePosition())},err:null};case 44:return this.bump(),this.bumpSpace(),this.isEOF()?this.error(T.EXPECT_ARGUMENT_CLOSING_BRACE,x(o,this.clonePosition())):this.parseArgumentOptions(t,r,n,o);default:return this.error(T.MALFORMED_ARGUMENT,x(o,this.clonePosition()))}},e.prototype.parseIdentifierIfPossible=function(){var t=this.clonePosition(),r=this.offset(),o=vt(this.message,r),n=r+o.length;this.bumpTo(n);var i=this.clonePosition(),a=x(t,i);return{value:o,location:a}},e.prototype.parseArgumentOptions=function(t,r,o,n){var i,a=this.clonePosition(),l=this.parseIdentifierIfPossible().value,c=this.clonePosition();switch(l){case"":return this.error(T.EXPECT_ARGUMENT_TYPE,x(a,c));case"number":case"date":case"time":{this.bumpSpace();var s=null;if(this.bumpIf(",")){this.bumpSpace();var u=this.clonePosition(),f=this.parseSimpleArgStyleIfPossible();if(f.err)return f;var _=Wi(f.val);if(_.length===0)return this.error(T.EXPECT_ARGUMENT_STYLE,x(this.clonePosition(),this.clonePosition()));var m=x(u,this.clonePosition());s={style:_,styleLocation:m}}var p=this.tryParseArgumentClose(n);if(p.err)return p;var g=x(n,this.clonePosition());if(s&&Ut(s?.style,"::",0)){var w=Xi(s.style.slice(2));if(l==="number"){var f=this.parseNumberSkeletonFromString(w,s.styleLocation);return f.err?f:{val:{type:k.number,value:o,location:g,style:f.val},err:null}}else{if(w.length===0)return this.error(T.EXPECT_DATE_TIME_SKELETON,g);var P=w;this.locale&&(P=Ii(w,this.locale));var _={type:ve.dateTime,pattern:P,location:s.styleLocation,parsedOptions:this.shouldParseSkeletons?Hi(P):{}},S=l==="date"?k.date:k.time;return{val:{type:S,value:o,location:g,style:_},err:null}}}return{val:{type:l==="number"?k.number:l==="date"?k.date:k.time,value:o,location:g,style:(i=s?.style)!==null&&i!==void 0?i:null},err:null}}case"plural":case"selectordinal":case"select":{var h=this.clonePosition();if(this.bumpSpace(),!this.bumpIf(","))return this.error(T.EXPECT_SELECT_ARGUMENT_OPTIONS,x(h,O({},h)));this.bumpSpace();var v=this.parseIdentifierIfPossible(),A=0;if(l!=="select"&&v.value==="offset"){if(!this.bumpIf(":"))return this.error(T.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE,x(this.clonePosition(),this.clonePosition()));this.bumpSpace();var f=this.tryParseDecimalInteger(T.EXPECT_PLURAL_ARGUMENT_OFFSET_VALUE,T.INVALID_PLURAL_ARGUMENT_OFFSET_VALUE);if(f.err)return f;this.bumpSpace(),v=this.parseIdentifierIfPossible(),A=f.val}var G=this.tryParsePluralOrSelectOptions(t,l,r,v);if(G.err)return G;var p=this.tryParseArgumentClose(n);if(p.err)return p;var z=x(n,this.clonePosition());return 
l==="select"?{val:{type:k.select,value:o,options:Ft(G.val),location:z},err:null}:{val:{type:k.plural,value:o,options:Ft(G.val),offset:A,pluralType:l==="plural"?"cardinal":"ordinal",location:z},err:null}}default:return this.error(T.INVALID_ARGUMENT_TYPE,x(a,c))}},e.prototype.tryParseArgumentClose=function(t){return this.isEOF()||this.char()!==125?this.error(T.EXPECT_ARGUMENT_CLOSING_BRACE,x(t,this.clonePosition())):(this.bump(),{val:!0,err:null})},e.prototype.parseSimpleArgStyleIfPossible=function(){for(var t=0,r=this.clonePosition();!this.isEOF();){var o=this.char();switch(o){case 39:{this.bump();var n=this.clonePosition();if(!this.bumpUntil("'"))return this.error(T.UNCLOSED_QUOTE_IN_ARGUMENT_STYLE,x(n,this.clonePosition()));this.bump();break}case 123:{t+=1,this.bump();break}case 125:{if(t>0)t-=1;else return{val:this.message.slice(r.offset,this.offset()),err:null};break}default:this.bump();break}}return{val:this.message.slice(r.offset,this.offset()),err:null}},e.prototype.parseNumberSkeletonFromString=function(t,r){var o=[];try{o=Pi(t)}catch{return this.error(T.INVALID_NUMBER_SKELETON,r)}return{val:{type:ve.number,tokens:o,location:r,parsedOptions:this.shouldParseSkeletons?Ci(o):{}},err:null}},e.prototype.tryParsePluralOrSelectOptions=function(t,r,o,n){for(var i,a=!1,l=[],c=new Set,s=n.value,u=n.location;;){if(s.length===0){var f=this.clonePosition();if(r!=="select"&&this.bumpIf("=")){var _=this.tryParseDecimalInteger(T.EXPECT_PLURAL_ARGUMENT_SELECTOR,T.INVALID_PLURAL_ARGUMENT_SELECTOR);if(_.err)return _;u=x(f,this.clonePosition()),s=this.message.slice(f.offset,this.offset())}else break}if(c.has(s))return this.error(r==="select"?T.DUPLICATE_SELECT_ARGUMENT_SELECTOR:T.DUPLICATE_PLURAL_ARGUMENT_SELECTOR,u);s==="other"&&(a=!0),this.bumpSpace();var m=this.clonePosition();if(!this.bumpIf("{"))return this.error(r==="select"?T.EXPECT_SELECT_ARGUMENT_SELECTOR_FRAGMENT:T.EXPECT_PLURAL_ARGUMENT_SELECTOR_FRAGMENT,x(this.clonePosition(),this.clonePosition()));var p=this.parseMessage(t+1,r,o);if(p.err)return p;var g=this.tryParseArgumentClose(m);if(g.err)return g;l.push([s,{value:p.val,location:x(m,this.clonePosition())}]),c.add(s),this.bumpSpace(),i=this.parseIdentifierIfPossible(),s=i.value,u=i.location}return l.length===0?this.error(r==="select"?T.EXPECT_SELECT_ARGUMENT_SELECTOR:T.EXPECT_PLURAL_ARGUMENT_SELECTOR,x(this.clonePosition(),this.clonePosition())):this.requiresOtherClause&&!a?this.error(T.MISSING_OTHER_CLAUSE,x(this.clonePosition(),this.clonePosition())):{val:l,err:null}},e.prototype.tryParseDecimalInteger=function(t,r){var o=1,n=this.clonePosition();this.bumpIf("+")||this.bumpIf("-")&&(o=-1);for(var i=!1,a=0;!this.isEOF();){var l=this.char();if(l>=48&&l<=57)i=!0,a=a*10+(l-48),this.bump();else break}var c=x(n,this.clonePosition());return i?(a*=o,qi(a)?{val:a,err:null}:this.error(r,c)):this.error(t,c)},e.prototype.offset=function(){return this.position.offset},e.prototype.isEOF=function(){return this.offset()===this.message.length},e.prototype.clonePosition=function(){return{offset:this.position.offset,line:this.position.line,column:this.position.column}},e.prototype.char=function(){var t=this.position.offset;if(t>=this.message.length)throw Error("out of bound");var r=Ur(this.message,t);if(r===void 0)throw Error("Offset ".concat(t," is at invalid UTF-16 code unit boundary"));return r},e.prototype.error=function(t,r){return{val:null,err:{kind:t,message:this.message,location:r}}},e.prototype.bump=function(){if(!this.isEOF()){var 
t=this.char();t===10?(this.position.line+=1,this.position.column=1,this.position.offset+=1):(this.position.column+=1,this.position.offset+=t<65536?1:2)}},e.prototype.bumpIf=function(t){if(Ut(this.message,t,this.offset())){for(var r=0;r=0?(this.bumpTo(o),!0):(this.bumpTo(this.message.length),!1)},e.prototype.bumpTo=function(t){if(this.offset()>t)throw Error("targetOffset ".concat(t," must be greater than or equal to the current offset ").concat(this.offset()));for(t=Math.min(t,this.message.length);;){var r=this.offset();if(r===t)break;if(r>t)throw Error("targetOffset ".concat(t," is at invalid UTF-16 code unit boundary"));if(this.bump(),this.isEOF())break}},e.prototype.bumpSpace=function(){for(;!this.isEOF()&&Vr(this.char());)this.bump()},e.prototype.peek=function(){if(this.isEOF())return null;var t=this.char(),r=this.offset(),o=this.message.charCodeAt(r+(t>=65536?2:1));return o??null},e}();function yt(e){return e>=97&&e<=122||e>=65&&e<=90}function Yi(e){return yt(e)||e===47}function Ji(e){return e===45||e===46||e>=48&&e<=57||e===95||e>=97&&e<=122||e>=65&&e<=90||e==183||e>=192&&e<=214||e>=216&&e<=246||e>=248&&e<=893||e>=895&&e<=8191||e>=8204&&e<=8205||e>=8255&&e<=8256||e>=8304&&e<=8591||e>=11264&&e<=12271||e>=12289&&e<=55295||e>=63744&&e<=64975||e>=65008&&e<=65533||e>=65536&&e<=983039}function Vr(e){return e>=9&&e<=13||e===32||e===133||e>=8206&&e<=8207||e===8232||e===8233}function Qi(e){return e>=33&&e<=35||e===36||e>=37&&e<=39||e===40||e===41||e===42||e===43||e===44||e===45||e>=46&&e<=47||e>=58&&e<=59||e>=60&&e<=62||e>=63&&e<=64||e===91||e===92||e===93||e===94||e===96||e===123||e===124||e===125||e===126||e===161||e>=162&&e<=165||e===166||e===167||e===169||e===171||e===172||e===174||e===176||e===177||e===182||e===187||e===191||e===215||e===247||e>=8208&&e<=8213||e>=8214&&e<=8215||e===8216||e===8217||e===8218||e>=8219&&e<=8220||e===8221||e===8222||e===8223||e>=8224&&e<=8231||e>=8240&&e<=8248||e===8249||e===8250||e>=8251&&e<=8254||e>=8257&&e<=8259||e===8260||e===8261||e===8262||e>=8263&&e<=8273||e===8274||e===8275||e>=8277&&e<=8286||e>=8592&&e<=8596||e>=8597&&e<=8601||e>=8602&&e<=8603||e>=8604&&e<=8607||e===8608||e>=8609&&e<=8610||e===8611||e>=8612&&e<=8613||e===8614||e>=8615&&e<=8621||e===8622||e>=8623&&e<=8653||e>=8654&&e<=8655||e>=8656&&e<=8657||e===8658||e===8659||e===8660||e>=8661&&e<=8691||e>=8692&&e<=8959||e>=8960&&e<=8967||e===8968||e===8969||e===8970||e===8971||e>=8972&&e<=8991||e>=8992&&e<=8993||e>=8994&&e<=9e3||e===9001||e===9002||e>=9003&&e<=9083||e===9084||e>=9085&&e<=9114||e>=9115&&e<=9139||e>=9140&&e<=9179||e>=9180&&e<=9185||e>=9186&&e<=9254||e>=9255&&e<=9279||e>=9280&&e<=9290||e>=9291&&e<=9311||e>=9472&&e<=9654||e===9655||e>=9656&&e<=9664||e===9665||e>=9666&&e<=9719||e>=9720&&e<=9727||e>=9728&&e<=9838||e===9839||e>=9840&&e<=10087||e===10088||e===10089||e===10090||e===10091||e===10092||e===10093||e===10094||e===10095||e===10096||e===10097||e===10098||e===10099||e===10100||e===10101||e>=10132&&e<=10175||e>=10176&&e<=10180||e===10181||e===10182||e>=10183&&e<=10213||e===10214||e===10215||e===10216||e===10217||e===10218||e===10219||e===10220||e===10221||e===10222||e===10223||e>=10224&&e<=10239||e>=10240&&e<=10495||e>=10496&&e<=10626||e===10627||e===10628||e===10629||e===10630||e===10631||e===10632||e===10633||e===10634||e===10635||e===10636||e===10637||e===10638||e===10639||e===10640||e===10641||e===10642||e===10643||e===10644||e===10645||e===10646||e===10647||e===10648||e>=10649&&e<=10711||e===10712||e===10713||e===10714||e===10715||e>=10716&&e<=10747||e===10748||e===10749||e>=1075
0&&e<=11007||e>=11008&&e<=11055||e>=11056&&e<=11076||e>=11077&&e<=11078||e>=11079&&e<=11084||e>=11085&&e<=11123||e>=11124&&e<=11125||e>=11126&&e<=11157||e===11158||e>=11159&&e<=11263||e>=11776&&e<=11777||e===11778||e===11779||e===11780||e===11781||e>=11782&&e<=11784||e===11785||e===11786||e===11787||e===11788||e===11789||e>=11790&&e<=11798||e===11799||e>=11800&&e<=11801||e===11802||e===11803||e===11804||e===11805||e>=11806&&e<=11807||e===11808||e===11809||e===11810||e===11811||e===11812||e===11813||e===11814||e===11815||e===11816||e===11817||e>=11818&&e<=11822||e===11823||e>=11824&&e<=11833||e>=11834&&e<=11835||e>=11836&&e<=11839||e===11840||e===11841||e===11842||e>=11843&&e<=11855||e>=11856&&e<=11857||e===11858||e>=11859&&e<=11903||e>=12289&&e<=12291||e===12296||e===12297||e===12298||e===12299||e===12300||e===12301||e===12302||e===12303||e===12304||e===12305||e>=12306&&e<=12307||e===12308||e===12309||e===12310||e===12311||e===12312||e===12313||e===12314||e===12315||e===12316||e===12317||e>=12318&&e<=12319||e===12320||e===12336||e===64830||e===64831||e>=65093&&e<=65094}function Et(e){e.forEach(function(t){if(delete t.location,Ir(t)||Nr(t))for(var r in t.options)delete t.options[r].location,Et(t.options[r].value);else Br(t)&&Mr(t.style)||(Ar(t)||Cr(t))&&pt(t.style)?delete t.style.location:Lr(t)&&Et(t.children)})}function Ki(e,t){t===void 0&&(t={}),t=O({shouldParseSkeletons:!0,requiresOtherClause:!0},t);var r=new Zi(e,t).parse();if(r.err){var o=SyntaxError(T[r.err.kind]);throw o.location=r.err.location,o.originalMessage=r.err.message,o}return t?.captureLocation||Et(r.val),r.val}function ut(e,t){var r=t&&t.cache?t.cache:na,o=t&&t.serializer?t.serializer:oa,n=t&&t.strategy?t.strategy:ea;return n(e,{cache:r,serializer:o})}function $i(e){return e==null||typeof e=="number"||typeof e=="boolean"}function qr(e,t,r,o){var n=$i(o)?o:r(o),i=t.get(n);return typeof i>"u"&&(i=e.call(this,o),t.set(n,i)),i}function zr(e,t,r){var o=Array.prototype.slice.call(arguments,3),n=r(o),i=t.get(n);return typeof i>"u"&&(i=e.apply(this,o),t.set(n,i)),i}function Pt(e,t,r,o,n){return r.bind(t,e,o,n)}function ea(e,t){var r=e.length===1?qr:zr;return Pt(e,this,r,t.cache.create(),t.serializer)}function ta(e,t){return Pt(e,this,zr,t.cache.create(),t.serializer)}function ra(e,t){return Pt(e,this,qr,t.cache.create(),t.serializer)}var oa=function(){return JSON.stringify(arguments)};function kt(){this.cache=Object.create(null)}kt.prototype.get=function(e){return this.cache[e]};kt.prototype.set=function(e,t){this.cache[e]=t};var na={create:function(){return new kt}},ct={variadic:ta,monadic:ra},ye;(function(e){e.MISSING_VALUE="MISSING_VALUE",e.INVALID_VALUE="INVALID_VALUE",e.MISSING_INTL_API="MISSING_INTL_API"})(ye||(ye={}));var tt=function(e){et(t,e);function t(r,o,n){var i=e.call(this,r)||this;return i.code=o,i.originalMessage=n,i}return t.prototype.toString=function(){return"[formatjs Error: ".concat(this.code,"] ").concat(this.message)},t}(Error),qt=function(e){et(t,e);function t(r,o,n,i){return e.call(this,'Invalid values for "'.concat(r,'": "').concat(o,'". 
Options are "').concat(Object.keys(n).join('", "'),'"'),ye.INVALID_VALUE,i)||this}return t}(tt),ia=function(e){et(t,e);function t(r,o,n){return e.call(this,'Value for "'.concat(r,'" must be of type ').concat(o),ye.INVALID_VALUE,n)||this}return t}(tt),aa=function(e){et(t,e);function t(r,o){return e.call(this,'The intl string context variable "'.concat(r,'" was not provided to the string "').concat(o,'"'),ye.MISSING_VALUE,o)||this}return t}(tt),M;(function(e){e[e.literal=0]="literal",e[e.object=1]="object"})(M||(M={}));function sa(e){return e.length<2?e:e.reduce(function(t,r){var o=t[t.length-1];return!o||o.type!==M.literal||r.type!==M.literal?t.push(r):o.value+=r.value,t},[])}function la(e){return typeof e=="function"}function Ze(e,t,r,o,n,i,a){if(e.length===1&&Rt(e[0]))return[{type:M.literal,value:e[0].value}];for(var l=[],c=0,s=e;c"u")){var r=Intl.NumberFormat.supportedLocalesOf(t);return r.length>0?new Intl.Locale(r[0]):new Intl.Locale(typeof t=="string"?t:t[0])}},e.__parse=Ki,e.formats={number:{integer:{maximumFractionDigits:0},currency:{style:"currency"},percent:{style:"percent"}},date:{short:{month:"numeric",day:"numeric",year:"2-digit"},medium:{month:"short",day:"numeric",year:"numeric"},long:{month:"long",day:"numeric",year:"numeric"},full:{weekday:"long",month:"long",day:"numeric",year:"numeric"}},time:{short:{hour:"numeric",minute:"numeric"},medium:{hour:"numeric",minute:"numeric",second:"numeric"},long:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"},full:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"}}},e}();function _a(e,t){if(t==null)return;if(t in e)return e[t];const r=t.split(".");let o=e;for(let n=0;n0){const i=r.slice(n,r.length).join(".");if(i in o){o=o[i];break}}o=o[r[n]]}else o=void 0;return o}const ie={},ha=(e,t,r)=>r&&(t in ie||(ie[t]={}),e in ie[t]||(ie[t][e]=r),r),Wr=(e,t)=>{if(t==null)return;if(t in ie&&e in ie[t])return ie[t][e];const r=De(t);for(let o=0;o(r[e]=Ei.all([r[e]||{},...t]),r))}xe([je],([e])=>Object.keys(e));je.subscribe(e=>Bt=e);const Ye={};function ga(e,t){Ye[e].delete(t),Ye[e].size===0&&delete Ye[e]}function Jr(e){return Ye[e]}function ba(e){return De(e).map(t=>{const r=Jr(t);return[t,r?[...r]:[]]}).filter(([,t])=>t.length>0)}function Je(e){return e==null?!1:De(e).some(t=>{var r;return(r=Jr(t))==null?void 0:r.size})}function va(e,t){return Promise.all(t.map(o=>(ga(e,o),o().then(n=>n.default||n)))).then(o=>Yr(e,...o))}const Be={};function Qr(e){if(!Je(e))return e in Be?Be[e]:Promise.resolve();const t=ba(e);return Be[e]=Promise.all(t.map(([r,o])=>va(r,o))).then(()=>{if(Je(e))return Qr(e);delete Be[e]}),Be[e]}var zt=Object.getOwnPropertySymbols,ya=Object.prototype.hasOwnProperty,Ea=Object.prototype.propertyIsEnumerable,wa=(e,t)=>{var r={};for(var o in e)ya.call(e,o)&&t.indexOf(o)<0&&(r[o]=e[o]);if(e!=null&&zt)for(var o of zt(e))t.indexOf(o)<0&&Ea.call(e,o)&&(r[o]=e[o]);return r};const 
Sa={number:{scientific:{notation:"scientific"},engineering:{notation:"engineering"},compactLong:{notation:"compact",compactDisplay:"long"},compactShort:{notation:"compact",compactDisplay:"short"}},date:{short:{month:"numeric",day:"numeric",year:"2-digit"},medium:{month:"short",day:"numeric",year:"numeric"},long:{month:"long",day:"numeric",year:"numeric"},full:{weekday:"long",month:"long",day:"numeric",year:"numeric"}},time:{short:{hour:"numeric",minute:"numeric"},medium:{hour:"numeric",minute:"numeric",second:"numeric"},long:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"},full:{hour:"numeric",minute:"numeric",second:"numeric",timeZoneName:"short"}}};function Ta({locale:e,id:t}){console.warn(`[svelte-i18n] The message "${t}" was not found in "${De(e).join('", "')}".${Je(ue())?` - -Note: there are at least one loader still registered to this locale that wasn't executed.`:""}`)}const xa={fallbackLocale:null,loadingDelay:200,formats:Sa,warnOnMissingMessages:!0,handleMissingMessage:void 0,ignoreTag:!0},Ae=xa;function Ee(){return Ae}function Ha(e){const t=e,{formats:r}=t,o=wa(t,["formats"]);let n=e.fallbackLocale;if(e.initialLocale)try{Xr.resolveLocale(e.initialLocale)&&(n=e.initialLocale)}catch{console.warn(`[svelte-i18n] The initial locale "${e.initialLocale}" is not a valid locale.`)}return o.warnOnMissingMessages&&(delete o.warnOnMissingMessages,o.handleMissingMessage==null?o.handleMissingMessage=Ta:console.warn('[svelte-i18n] The "warnOnMissingMessages" option is deprecated. Please use the "handleMissingMessage" option instead.')),Object.assign(Ae,o,{initialLocale:n}),r&&("number"in r&&Object.assign(Ae.formats.number,r.number),"date"in r&&Object.assign(Ae.formats.date,r.date),"time"in r&&Object.assign(Ae.formats.time,r.time)),He.set(n)}const _t=Te(!1);var Oa=Object.defineProperty,Pa=Object.defineProperties,ka=Object.getOwnPropertyDescriptors,Xt=Object.getOwnPropertySymbols,Ba=Object.prototype.hasOwnProperty,Aa=Object.prototype.propertyIsEnumerable,Wt=(e,t,r)=>t in e?Oa(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,Ca=(e,t)=>{for(var r in t||(t={}))Ba.call(t,r)&&Wt(e,r,t[r]);if(Xt)for(var r of Xt(t))Aa.call(t,r)&&Wt(e,r,t[r]);return e},Ia=(e,t)=>Pa(e,ka(t));let wt;const Qe=Te(null);function Zt(e){return e.split("-").map((t,r,o)=>o.slice(0,r+1).join("-")).reverse()}function De(e,t=Ee().fallbackLocale){const r=Zt(e);return t?[...new Set([...r,...Zt(t)])]:r}function ue(){return wt??void 0}Qe.subscribe(e=>{wt=e??void 0,typeof window<"u"&&e!=null&&document.documentElement.setAttribute("lang",e)});const Na=e=>{if(e&&pa(e)&&Je(e)){const{loadingDelay:t}=Ee();let r;return typeof window<"u"&&ue()!=null&&t?r=window.setTimeout(()=>_t.set(!0),t):_t.set(!0),Qr(e).then(()=>{Qe.set(e)}).finally(()=>{clearTimeout(r),_t.set(!1)})}return Qe.set(e)},He=Ia(Ca({},Qe),{set:Na}),La=()=>typeof window>"u"?null:window.navigator.language||window.navigator.languages[0],rt=e=>{const t=Object.create(null);return o=>{const n=JSON.stringify(o);return n in t?t[n]:t[n]=e(o)}};var Ma=Object.defineProperty,Ke=Object.getOwnPropertySymbols,Kr=Object.prototype.hasOwnProperty,$r=Object.prototype.propertyIsEnumerable,Yt=(e,t,r)=>t in e?Ma(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,At=(e,t)=>{for(var r in t||(t={}))Kr.call(t,r)&&Yt(e,r,t[r]);if(Ke)for(var r of Ke(t))$r.call(t,r)&&Yt(e,r,t[r]);return e},Oe=(e,t)=>{var r={};for(var o in e)Kr.call(e,o)&&t.indexOf(o)<0&&(r[o]=e[o]);if(e!=null&&Ke)for(var o of Ke(e))t.indexOf(o)<0&&$r.call(e,o)&&(r[o]=e[o]);return r};const 
Le=(e,t)=>{const{formats:r}=Ee();if(e in r&&t in r[e])return r[e][t];throw new Error(`[svelte-i18n] Unknown "${t}" ${e} format.`)},Ra=rt(e=>{var t=e,{locale:r,format:o}=t,n=Oe(t,["locale","format"]);if(r==null)throw new Error('[svelte-i18n] A "locale" must be set to format numbers');return o&&(n=Le("number",o)),new Intl.NumberFormat(r,n)}),ja=rt(e=>{var t=e,{locale:r,format:o}=t,n=Oe(t,["locale","format"]);if(r==null)throw new Error('[svelte-i18n] A "locale" must be set to format dates');return o?n=Le("date",o):Object.keys(n).length===0&&(n=Le("date","short")),new Intl.DateTimeFormat(r,n)}),Da=rt(e=>{var t=e,{locale:r,format:o}=t,n=Oe(t,["locale","format"]);if(r==null)throw new Error('[svelte-i18n] A "locale" must be set to format time values');return o?n=Le("time",o):Object.keys(n).length===0&&(n=Le("time","short")),new Intl.DateTimeFormat(r,n)}),Ga=(e={})=>{var t=e,{locale:r=ue()}=t,o=Oe(t,["locale"]);return Ra(At({locale:r},o))},Ua=(e={})=>{var t=e,{locale:r=ue()}=t,o=Oe(t,["locale"]);return ja(At({locale:r},o))},Fa=(e={})=>{var t=e,{locale:r=ue()}=t,o=Oe(t,["locale"]);return Da(At({locale:r},o))},Va=rt((e,t=ue())=>new Xr(e,t,Ee().formats,{ignoreTag:Ee().ignoreTag})),qa=(e,t={})=>{var r,o,n,i;let a=t;typeof e=="object"&&(a=e,e=a.id);const{values:l,locale:c=ue(),default:s}=a;if(c==null)throw new Error("[svelte-i18n] Cannot format a message without first setting the initial locale.");let u=Wr(e,c);if(!u)u=(i=(n=(o=(r=Ee()).handleMissingMessage)==null?void 0:o.call(r,{locale:c,id:e,defaultValue:s}))!=null?n:s)!=null?i:e;else if(typeof u!="string")return console.warn(`[svelte-i18n] Message with id "${e}" must be of type "string", found: "${typeof u}". Gettin its value through the "$format" method is deprecated; use the "json" method instead.`),u;if(!l)return u;let f=u;try{f=Va(u,c).format(l)}catch(_){_ instanceof Error&&console.warn(`[svelte-i18n] Message "${e}" has syntax error:`,_.message)}return f},za=(e,t)=>Fa(t).format(e),Xa=(e,t)=>Ua(t).format(e),Wa=(e,t)=>Ga(t).format(e),Za=(e,t=ue())=>Wr(e,t),eo=xe([He,je],()=>qa);xe([He],()=>za);xe([He],()=>Xa);xe([He],()=>Wa);xe([He,je],()=>Za);const{SvelteComponent:Ya,append:N,attr:L,binding_callbacks:Ja,component_subscribe:Qa,create_slot:Ka,detach:to,element:K,get_all_dirty_from_scope:$a,get_slot_changes:es,init:ts,insert:ro,safe_not_equal:rs,set_data:ht,set_style:qe,space:Ce,text:ze,toggle_class:de,transition_in:os,transition_out:ns,update_slot_base:is}=window.__gradio__svelte__internal;function Jt(e){let t,r,o,n,i,a,l,c=e[8]("common.built_with")+"",s,u,f,_,m,p,g=e[8]("common.hosted_on")+"",w,P,S;return{c(){t=K("div"),r=K("span"),o=K("a"),n=ze(e[4]),a=Ce(),l=K("span"),s=ze(c),u=Ce(),f=K("a"),f.textContent="Gradio",_=ze("."),m=Ce(),p=K("span"),w=ze(g),P=Ce(),S=K("a"),S.innerHTML=` Spaces`,L(o,"href",i="https://huggingface.co/spaces/"+e[4]),L(o,"class","title svelte-1kyws56"),L(r,"class","svelte-1kyws56"),L(f,"class","gradio svelte-1kyws56"),L(f,"href","https://gradio.app"),L(l,"class","svelte-1kyws56"),L(S,"class","hf svelte-1kyws56"),L(S,"href","https://huggingface.co/spaces"),L(p,"class","svelte-1kyws56"),L(t,"class","info svelte-1kyws56")},m(h,v){ro(h,t,v),N(t,r),N(r,o),N(o,n),N(t,a),N(t,l),N(l,s),N(l,u),N(l,f),N(l,_),N(t,m),N(t,p),N(p,w),N(p,P),N(p,S)},p(h,v){v&16&&ht(n,h[4]),v&16&&i!==(i="https://huggingface.co/spaces/"+h[4])&&L(o,"href",i),v&256&&c!==(c=h[8]("common.built_with")+"")&&ht(s,c),v&256&&g!==(g=h[8]("common.hosted_on")+"")&&ht(w,g)},d(h){h&&to(t)}}}function as(e){let t,r,o,n,i;const a=e[10].default,l=Ka(a,e,e[9],null);let 
c=e[5]&&e[4]&&e[6]&&Jt(e);return{c(){t=K("div"),r=K("div"),l&&l.c(),o=Ce(),c&&c.c(),L(r,"class","main svelte-1kyws56"),L(t,"class",n="gradio-container gradio-container-"+e[1]+" svelte-1kyws56"),L(t,"data-iframe-height",""),de(t,"app",!e[5]&&!e[3]),de(t,"embed-container",e[5]),de(t,"with-info",e[6]),qe(t,"min-height",e[7]?"initial":e[2]),qe(t,"flex-grow",e[5]?"auto":"1")},m(s,u){ro(s,t,u),N(t,r),l&&l.m(r,null),N(t,o),c&&c.m(t,null),e[11](t),i=!0},p(s,[u]){l&&l.p&&(!i||u&512)&&is(l,a,s,s[9],i?es(a,s[9],u,null):$a(s[9]),null),s[5]&&s[4]&&s[6]?c?c.p(s,u):(c=Jt(s),c.c(),c.m(t,null)):c&&(c.d(1),c=null),(!i||u&2&&n!==(n="gradio-container gradio-container-"+s[1]+" svelte-1kyws56"))&&L(t,"class",n),(!i||u&42)&&de(t,"app",!s[5]&&!s[3]),(!i||u&34)&&de(t,"embed-container",s[5]),(!i||u&66)&&de(t,"with-info",s[6]),u&132&&qe(t,"min-height",s[7]?"initial":s[2]),u&32&&qe(t,"flex-grow",s[5]?"auto":"1")},i(s){i||(os(l,s),i=!0)},o(s){ns(l,s),i=!1},d(s){s&&to(t),l&&l.d(s),c&&c.d(),e[11](null)}}}function ss(e,t,r){let o;Qa(e,eo,g=>r(8,o=g));let{$$slots:n={},$$scope:i}=t,{wrapper:a}=t,{version:l}=t,{initial_height:c}=t,{is_embed:s}=t,{space:u}=t,{display:f}=t,{info:_}=t,{loaded:m}=t;function p(g){Ja[g?"unshift":"push"](()=>{a=g,r(0,a)})}return e.$$set=g=>{"wrapper"in g&&r(0,a=g.wrapper),"version"in g&&r(1,l=g.version),"initial_height"in g&&r(2,c=g.initial_height),"is_embed"in g&&r(3,s=g.is_embed),"space"in g&&r(4,u=g.space),"display"in g&&r(5,f=g.display),"info"in g&&r(6,_=g.info),"loaded"in g&&r(7,m=g.loaded),"$$scope"in g&&r(9,i=g.$$scope)},[a,l,c,s,u,f,_,m,o,i,n,p]}class ls extends Ya{constructor(t){super(),ts(this,t,ss,as,rs,{wrapper:0,version:1,initial_height:2,is_embed:3,space:4,display:5,info:6,loaded:7})}}function me(e){let t=["","k","M","G","T","P","E","Z"],r=0;for(;e>1e3&&rSt(e,t[i],r[i],o[i]));if(typeof r=="object"){const n={};for(const i in r)n[i]=St(e,t[i],r[i],o[i]);return n}else throw new Error(`Cannot spring ${typeof r} values`)}}function Kt(e,t={}){const r=Te(e),{stiffness:o=.15,damping:n=.8,precision:i=.01}=t;let a,l,c,s=e,u=e,f=1,_=0,m=!1;function p(w,P={}){u=w;const S=c={};return e==null||P.hard||g.stiffness>=1&&g.damping>=1?(m=!0,a=Lt(),s=w,r.set(e=u),Promise.resolve()):(P.soft&&(_=1/((P.soft===!0?.5:+P.soft)*60),f=0),l||(a=Lt(),m=!1,l=oi(h=>{if(m)return m=!1,l=null,!1;f=Math.min(f+_,1);const v={inv_mass:f,opts:g,settled:!0,dt:(h-a)*60/1e3},A=St(v,s,e,u);return a=h,s=e,r.set(e=A),v.settled&&(l=null),!v.settled})),new Promise(h=>{l.promise.then(()=>{S===c&&h()})}))}const g={set:p,update:(w,P)=>p(w(u,e),P),subscribe:r.subscribe,stiffness:o,damping:n,precision:i};return g}const{SvelteComponent:us,append:X,attr:H,component_subscribe:$t,detach:cs,element:fs,init:_s,insert:hs,noop:er,safe_not_equal:ds,set_style:Xe,svg_element:W,toggle_class:tr}=window.__gradio__svelte__internal,{onMount:ms}=window.__gradio__svelte__internal;function ps(e){let t,r,o,n,i,a,l,c,s,u,f,_;return{c(){t=fs("div"),r=W("svg"),o=W("g"),n=W("path"),i=W("path"),a=W("path"),l=W("path"),c=W("g"),s=W("path"),u=W("path"),f=W("path"),_=W("path"),H(n,"d","M255.926 0.754768L509.702 139.936V221.027L255.926 81.8465V0.754768Z"),H(n,"fill","#FF7C00"),H(n,"fill-opacity","0.4"),H(n,"class","svelte-zyxd38"),H(i,"d","M509.69 139.936L254.981 279.641V361.255L509.69 221.55V139.936Z"),H(i,"fill","#FF7C00"),H(i,"class","svelte-zyxd38"),H(a,"d","M0.250138 139.937L254.981 279.641V361.255L0.250138 221.55V139.937Z"),H(a,"fill","#FF7C00"),H(a,"fill-opacity","0.4"),H(a,"class","svelte-zyxd38"),H(l,"d","M255.923 0.232622L0.236328 139.936V221.55L255.923 
81.8469V0.232622Z"),H(l,"fill","#FF7C00"),H(l,"class","svelte-zyxd38"),Xe(o,"transform","translate("+e[1][0]+"px, "+e[1][1]+"px)"),H(s,"d","M255.926 141.5L509.702 280.681V361.773L255.926 222.592V141.5Z"),H(s,"fill","#FF7C00"),H(s,"fill-opacity","0.4"),H(s,"class","svelte-zyxd38"),H(u,"d","M509.69 280.679L254.981 420.384V501.998L509.69 362.293V280.679Z"),H(u,"fill","#FF7C00"),H(u,"class","svelte-zyxd38"),H(f,"d","M0.250138 280.681L254.981 420.386V502L0.250138 362.295V280.681Z"),H(f,"fill","#FF7C00"),H(f,"fill-opacity","0.4"),H(f,"class","svelte-zyxd38"),H(_,"d","M255.923 140.977L0.236328 280.68V362.294L255.923 222.591V140.977Z"),H(_,"fill","#FF7C00"),H(_,"class","svelte-zyxd38"),Xe(c,"transform","translate("+e[2][0]+"px, "+e[2][1]+"px)"),H(r,"viewBox","-1200 -1200 3000 3000"),H(r,"fill","none"),H(r,"xmlns","http://www.w3.org/2000/svg"),H(r,"class","svelte-zyxd38"),H(t,"class","svelte-zyxd38"),tr(t,"margin",e[0])},m(m,p){hs(m,t,p),X(t,r),X(r,o),X(o,n),X(o,i),X(o,a),X(o,l),X(r,c),X(c,s),X(c,u),X(c,f),X(c,_)},p(m,[p]){p&2&&Xe(o,"transform","translate("+m[1][0]+"px, "+m[1][1]+"px)"),p&4&&Xe(c,"transform","translate("+m[2][0]+"px, "+m[2][1]+"px)"),p&1&&tr(t,"margin",m[0])},i:er,o:er,d(m){m&&cs(t)}}}function gs(e,t,r){let o,n,{margin:i=!0}=t;const a=Kt([0,0]);$t(e,a,_=>r(1,o=_));const l=Kt([0,0]);$t(e,l,_=>r(2,n=_));let c;async function s(){await Promise.all([a.set([125,140]),l.set([-125,-140])]),await Promise.all([a.set([-125,140]),l.set([125,-140])]),await Promise.all([a.set([-125,0]),l.set([125,-0])]),await Promise.all([a.set([125,0]),l.set([-125,0])])}async function u(){await s(),c||u()}async function f(){await Promise.all([a.set([125,0]),l.set([-125,0])]),u()}return ms(()=>(f(),()=>c=!0)),e.$$set=_=>{"margin"in _&&r(0,i=_.margin)},[i,o,n,a,l]}class bs extends us{constructor(t){super(),_s(this,t,gs,ps,ds,{margin:0})}}const{SvelteComponent:vs,append:fe,attr:Y,binding_callbacks:rr,check_outros:oo,create_component:ys,create_slot:Es,destroy_component:ws,destroy_each:no,detach:y,element:ee,empty:Pe,ensure_array_like:$e,get_all_dirty_from_scope:Ss,get_slot_changes:Ts,group_outros:io,init:xs,insert:E,mount_component:Hs,noop:Tt,safe_not_equal:Os,set_data:q,set_style:ae,space:J,text:B,toggle_class:V,transition_in:we,transition_out:Se,update_slot_base:Ps}=window.__gradio__svelte__internal,{tick:ks}=window.__gradio__svelte__internal,{onDestroy:Bs}=window.__gradio__svelte__internal,As=e=>({}),or=e=>({});function nr(e,t,r){const o=e.slice();return o[38]=t[r],o[40]=r,o}function ir(e,t,r){const o=e.slice();return o[38]=t[r],o}function Cs(e){let t,r=e[1]("common.error")+"",o,n,i;const a=e[29].error,l=Es(a,e,e[28],or);return{c(){t=ee("span"),o=B(r),n=J(),l&&l.c(),Y(t,"class","error svelte-119qaqt")},m(c,s){E(c,t,s),fe(t,o),E(c,n,s),l&&l.m(c,s),i=!0},p(c,s){(!i||s[0]&2)&&r!==(r=c[1]("common.error")+"")&&q(o,r),l&&l.p&&(!i||s[0]&268435456)&&Ps(l,a,c,c[28],i?Ts(a,c[28],s,As):Ss(c[28]),or)},i(c){i||(we(l,c),i=!0)},o(c){Se(l,c),i=!1},d(c){c&&(y(t),y(n)),l&&l.d(c)}}}function Is(e){let t,r,o,n,i,a,l,c,s,u=e[8]==="default"&&e[18]&&e[6]==="full"&&ar(e);function f(h,v){if(h[7])return Ms;if(h[2]!==null&&h[3]!==void 0&&h[2]>=0)return Ls;if(h[2]===0)return Ns}let _=f(e),m=_&&_(e),p=e[5]&&ur(e);const g=[Gs,Ds],w=[];function P(h,v){return h[15]!=null?0:h[6]==="full"?1:-1}~(i=P(e))&&(a=w[i]=g[i](e));let S=!e[5]&&pr(e);return{c(){u&&u.c(),t=J(),r=ee("div"),m&&m.c(),o=J(),p&&p.c(),n=J(),a&&a.c(),l=J(),S&&S.c(),c=Pe(),Y(r,"class","progress-text 
svelte-119qaqt"),V(r,"meta-text-center",e[8]==="center"),V(r,"meta-text",e[8]==="default")},m(h,v){u&&u.m(h,v),E(h,t,v),E(h,r,v),m&&m.m(r,null),fe(r,o),p&&p.m(r,null),E(h,n,v),~i&&w[i].m(h,v),E(h,l,v),S&&S.m(h,v),E(h,c,v),s=!0},p(h,v){h[8]==="default"&&h[18]&&h[6]==="full"?u?u.p(h,v):(u=ar(h),u.c(),u.m(t.parentNode,t)):u&&(u.d(1),u=null),_===(_=f(h))&&m?m.p(h,v):(m&&m.d(1),m=_&&_(h),m&&(m.c(),m.m(r,o))),h[5]?p?p.p(h,v):(p=ur(h),p.c(),p.m(r,null)):p&&(p.d(1),p=null),(!s||v[0]&256)&&V(r,"meta-text-center",h[8]==="center"),(!s||v[0]&256)&&V(r,"meta-text",h[8]==="default");let A=i;i=P(h),i===A?~i&&w[i].p(h,v):(a&&(io(),Se(w[A],1,1,()=>{w[A]=null}),oo()),~i?(a=w[i],a?a.p(h,v):(a=w[i]=g[i](h),a.c()),we(a,1),a.m(l.parentNode,l)):a=null),h[5]?S&&(S.d(1),S=null):S?S.p(h,v):(S=pr(h),S.c(),S.m(c.parentNode,c))},i(h){s||(we(a),s=!0)},o(h){Se(a),s=!1},d(h){h&&(y(t),y(r),y(n),y(l),y(c)),u&&u.d(h),m&&m.d(),p&&p.d(),~i&&w[i].d(h),S&&S.d(h)}}}function ar(e){let t,r=`translateX(${(e[17]||0)*100-100}%)`;return{c(){t=ee("div"),Y(t,"class","eta-bar svelte-119qaqt"),ae(t,"transform",r)},m(o,n){E(o,t,n)},p(o,n){n[0]&131072&&r!==(r=`translateX(${(o[17]||0)*100-100}%)`)&&ae(t,"transform",r)},d(o){o&&y(t)}}}function Ns(e){let t;return{c(){t=B("processing |")},m(r,o){E(r,t,o)},p:Tt,d(r){r&&y(t)}}}function Ls(e){let t,r=e[2]+1+"",o,n,i,a;return{c(){t=B("queue: "),o=B(r),n=B("/"),i=B(e[3]),a=B(" |")},m(l,c){E(l,t,c),E(l,o,c),E(l,n,c),E(l,i,c),E(l,a,c)},p(l,c){c[0]&4&&r!==(r=l[2]+1+"")&&q(o,r),c[0]&8&&q(i,l[3])},d(l){l&&(y(t),y(o),y(n),y(i),y(a))}}}function Ms(e){let t,r=$e(e[7]),o=[];for(let n=0;n{l[f]=null}),oo()),~r?(o=l[r],o?o.p(s,u):(o=l[r]=a[r](s),o.c()),we(o,1),o.m(t,null)):o=null),(!i||u[0]&320&&n!==(n="wrap "+s[8]+" "+s[6]+" svelte-119qaqt"))&&Y(t,"class",n),(!i||u[0]&336)&&V(t,"hide",!s[4]||s[4]==="complete"||s[6]==="hidden"),(!i||u[0]&2384)&&V(t,"translucent",s[8]==="center"&&(s[4]==="pending"||s[4]==="error")||s[11]||s[6]==="minimal"),(!i||u[0]&336)&&V(t,"generating",s[4]==="generating"),(!i||u[0]&4416)&&V(t,"border",s[12]),u[0]&1024&&ae(t,"position",s[10]?"absolute":"static"),u[0]&1024&&ae(t,"padding",s[10]?"0":"var(--size-8) 0")},i(s){i||(we(o),i=!0)},o(s){Se(o),i=!1},d(s){s&&y(t),~r&&l[r].d(),e[31](null)}}}let We=[],dt=!1;async function Vs(e,t=!0){if(!(window.__gradio_mode__==="website"||window.__gradio_mode__!=="app"&&t!==!0)){if(We.push(e),!dt)dt=!0;else return;await ks(),requestAnimationFrame(()=>{let r=[0,0];for(let o=0;o{r(25,U=performance.now()),r(26,D=0),re=!0,Ue()};function Ue(){requestAnimationFrame(()=>{r(26,D=(performance.now()-U)/1e3),re&&Ue()})}function ke(){r(26,D=0),re&&(re=!1)}Bs(()=>{re&&ke()});let Q=null;function ce(b){rr[b?"unshift":"push"](()=>{j=b,r(16,j),r(7,w),r(14,Z),r(15,R)})}function Fe(b){rr[b?"unshift":"push"](()=>{z=b,r(13,z)})}return e.$$set=b=>{"i18n"in b&&r(1,a=b.i18n),"eta"in b&&r(0,l=b.eta),"queue"in b&&r(21,c=b.queue),"queue_position"in b&&r(2,s=b.queue_position),"queue_size"in b&&r(3,u=b.queue_size),"status"in b&&r(4,f=b.status),"scroll_to_output"in b&&r(22,_=b.scroll_to_output),"timer"in b&&r(5,m=b.timer),"show_progress"in b&&r(6,p=b.show_progress),"message"in b&&r(23,g=b.message),"progress"in b&&r(7,w=b.progress),"variant"in b&&r(8,P=b.variant),"loading_text"in b&&r(9,S=b.loading_text),"absolute"in b&&r(10,h=b.absolute),"translucent"in b&&r(11,v=b.translucent),"border"in b&&r(12,A=b.border),"autoscroll"in b&&r(24,G=b.autoscroll),"$$scope"in 
b&&r(28,i=b.$$scope)},e.$$.update=()=>{e.$$.dirty[0]&169869313&&(l===null?r(0,l=oe):c&&r(0,l=(performance.now()-U)/1e3+l),l!=null&&(r(19,Q=l.toFixed(1)),r(27,oe=l))),e.$$.dirty[0]&67108865&&r(17,C=l===null||l<=0||!D?null:Math.min(D/l,1)),e.$$.dirty[0]&128&&w!=null&&r(18,Ge=!1),e.$$.dirty[0]&114816&&(w!=null?r(14,Z=w.map(b=>{if(b.index!=null&&b.length!=null)return b.index/b.length;if(b.progress!=null)return b.progress})):r(14,Z=null),Z?(r(15,R=Z[Z.length-1]),j&&(R===0?r(16,j.style.transition="0",j):r(16,j.style.transition="150ms",j))):r(15,R=void 0)),e.$$.dirty[0]&16&&(f==="pending"?at():ke()),e.$$.dirty[0]&20979728&&z&&_&&(f==="pending"||f==="complete")&&Vs(z,G),e.$$.dirty[0]&8388624,e.$$.dirty[0]&67108864&&r(20,o=D.toFixed(1))},[l,a,s,u,f,m,p,w,P,S,h,v,A,z,Z,R,j,C,Ge,Q,o,c,_,g,G,U,D,oe,i,n,ce,Fe]}class zs extends vs{constructor(t){super(),xs(this,t,qs,Fs,Os,{i18n:1,eta:0,queue:21,queue_position:2,queue_size:3,status:4,scroll_to_output:22,timer:5,show_progress:6,message:23,progress:7,variant:8,loading_text:9,absolute:10,translucent:11,border:12,autoscroll:24},null,[-1,-1])}}const ao={built_with_gradio:"تم الإنشاء بإستخدام Gradio",clear:"أمسح",or:"أو",submit:"أرسل"},so={click_to_upload:"إضغط للتحميل",drop_audio:"أسقط الملف الصوتي هنا",drop_csv:"أسقط ملف البيانات هنا",drop_file:"أسقط الملف هنا",drop_image:"أسقط الصورة هنا",drop_video:"أسقط الفيديو هنا"},Xs={common:ao,upload_text:so},Ws=Object.freeze(Object.defineProperty({__proto__:null,common:ao,default:Xs,upload_text:so},Symbol.toStringTag,{value:"Module"})),lo={built_with_gradio:"Construït amb gradio",clear:"Neteja",empty:"Buit",error:"Error",loading:"S'està carregant",or:"o",submit:"Envia"},uo={click_to_upload:"Feu clic per pujar",drop_audio:"Deixeu anar l'àudio aquí",drop_csv:"Deixeu anar el CSV aquí",drop_file:"Deixeu anar el fitxer aquí",drop_image:"Deixeu anar la imatge aquí",drop_video:"Deixeu anar el vídeo aquí"},Zs={common:lo,upload_text:uo},Ys=Object.freeze(Object.defineProperty({__proto__:null,common:lo,default:Zs,upload_text:uo},Symbol.toStringTag,{value:"Module"})),co={annotated_image:"وێنەی نیشانە کراو"},fo={allow_recording_access:"تکایە ڕێگە بدە بە بەکارهێنانی مایکرۆفۆنەکە بۆ تۆمارکردن.",audio:"دەنگ",record_from_microphone:"تۆمارکردن لە مایکەوە",stop_recording:"تۆمارکردن بوەستێنە"},_o={connection_can_break:"لە مۆبایلدا، پەیوەندییەکە دەکرێت بپچڕێت ئەگەر ئەم تابە چالاک نەبێت یان ئامێرەکە بچێتە دۆخی پشوو، ئەمەش شوێنی خۆت لە ڕیزدا لەدەست دەدات.",long_requests_queue:"ڕیزێکی درێژی داواکاری هەیە. ئەم سپەیسە دووباد بکە بۆی چاوەڕوان نەبیت.",lost_connection:"پەیوەندی پچڕا بەهۆی جێهێشتنی پەیج. 
"},ho={checkbox:"بۆکسی هەڵبژاردن",checkbox_group:"گروپی بۆکسی هەڵبژاردن"},mo={code:"کۆد"},po={color_picker:"ڕەنگ هەڵبژاردە"},go={built_with:"دروستکراوە لەگەڵ...",built_with_gradio:"Gradio دروستکراوە بە",clear:"خاوێنکردنەوە",download:"دابەزاندن",edit:"بژارکردن",empty:"بەتاڵ",error:"هەڵە",hosted_on:"میوانداری کراوە لە",loading:"بارکردن",logo:"لۆگۆ",or:"یان",remove:"لابردن",share:"هاوبەشکردن",submit:"پێشکەشکردن",undo:"پووچکردنەوە"},bo={incorrect_format:"فۆرماتێکی هەڵە، تەنها فایلەکانی CSV و TSV پشتگیری دەکرێن",new_column:"ستوونی نوێ",new_row:"ڕیزێکی نوێ"},vo={dropdown:"فڕێدانە خوار"},yo={build_error:"هەڵەی دروستکردن هەیە",config_error:"هەڵەی ڕێکخستن هەیە",contact_page_author:"تکایە پەیوەندی بە نووسەری پەیجەوە بکەن بۆ ئەوەی ئاگاداریان بکەنەوە.",no_app_file:"هیچ فایلێکی ئەپ نییە",runtime_error:"هەڵەیەکی runtime هەیە",space_not_working:'"سپەیسەکە کارناکات چونکە" {0}',space_paused:"فەزاکە وەستاوە",use_via_api:"لە ڕێگەی API بەکاری بهێنە"},Eo={uploading:"بارکردن..."},wo={highlighted_text:"دەقی ڕۆشن کراو"},So={allow_webcam_access:"تکایە ڕێگە بدە بە بەکارهێنانی وێبکامەکە بۆ تۆمارکردن.",brush_color:"ڕەنگی فڵچە",brush_radius:"تیژڕەوی فڵچە",image:"وێنە",remove_image:"لابردنی وێنە",select_brush_color:"ڕەنگی فڵچە هەڵبژێرە",start_drawing:"دەست بکە بە وێنەکێشان",use_brush:"فڵچە بەکاربهێنە"},To={label:"لەیبڵ"},xo={enable_cookies:"ئەگەر تۆ سەردانی HuggingFace Space دەکەیت لە دۆخی نادیاردا، پێویستە کووکی لایەنی سێیەم چالاک بکەیت.",incorrect_credentials:"بڕوانامەی هەڵە",login:"چونه‌ ژووره‌وه‌"},Ho={number:"ژمارە"},Oo={plot:"هێڵکاری"},Po={radio:"ڕادیۆ"},ko={slider:"خلیسکە"},Bo={click_to_upload:"کلیک بکە بۆ بارکردن",drop_audio:"دەنگ لێرە دابنێ",drop_csv:"لێرەدا CSV دابنێ",drop_file:"فایل لێرە دابنێ",drop_image:"وێنە لێرەدا دابنێ",drop_video:"ڤیدیۆ لێرە دابنێ"},Js={"3D_model":{"3d_model":"مۆدێلی سێ ڕەهەندی"},annotated_image:co,audio:fo,blocks:_o,checkbox:ho,code:mo,color_picker:po,common:go,dataframe:bo,dropdown:vo,errors:yo,file:Eo,highlighted_text:wo,image:So,label:To,login:xo,number:Ho,plot:Oo,radio:Po,slider:ko,upload_text:Bo},Qs=Object.freeze(Object.defineProperty({__proto__:null,annotated_image:co,audio:fo,blocks:_o,checkbox:ho,code:mo,color_picker:po,common:go,dataframe:bo,default:Js,dropdown:vo,errors:yo,file:Eo,highlighted_text:wo,image:So,label:To,login:xo,number:Ho,plot:Oo,radio:Po,slider:ko,upload_text:Bo},Symbol.toStringTag,{value:"Module"})),Ao={built_with_gradio:"Mit Gradio erstellt",clear:"Löschen",or:"oder",submit:"Absenden"},Co={click_to_upload:"Hochladen",drop_audio:"Audio hier ablegen",drop_csv:"CSV Datei hier ablegen",drop_file:"Datei hier ablegen",drop_image:"Bild hier ablegen",drop_video:"Video hier ablegen"},Ks={common:Ao,upload_text:Co},$s=Object.freeze(Object.defineProperty({__proto__:null,common:Ao,default:Ks,upload_text:Co},Symbol.toStringTag,{value:"Module"})),Io={annotated_image:"Annotated Image"},No={allow_recording_access:"Please allow access to the microphone for recording.",audio:"Audio",record_from_microphone:"Record from microphone",stop_recording:"Stop recording",no_device_support:"Media devices could not be accessed. Check that you are running on a secure origin (https) or localhost (or you have passed a valid SSL certificate to ssl_verify), and you have allowed browser access to your device.",stop:"Stop",resume:"Resume",record:"Record",no_microphone:"No microphone found"},Lo={connection_can_break:"On mobile, the connection can break if this tab is unfocused or the device sleeps, losing your position in queue.",long_requests_queue:"There is a long queue of requests pending. 
Duplicate this Space to skip.",lost_connection:"Lost connection due to leaving page. Rejoining queue..."},Mo={checkbox:"Checkbox",checkbox_group:"Checkbox Group"},Ro={code:"Code"},jo={color_picker:"Color Picker"},Do={built_with:"built with",built_with_gradio:"Built with Gradio",clear:"Clear",download:"Download",edit:"Edit",empty:"Empty",error:"Error",hosted_on:"Hosted on",loading:"Loading",logo:"logo",or:"or",remove:"Remove",share:"Share",submit:"Submit",undo:"Undo"},Go={incorrect_format:"Incorrect format, only CSV and TSV files are supported",new_column:"New column",new_row:"New row"},Uo={dropdown:"Dropdown"},Fo={build_error:"there is a build error",config_error:"there is a config error",contact_page_author:"Please contact the author of the page to let them know.",no_app_file:"there is no app file",runtime_error:"there is a runtime error",space_not_working:`"Space isn't working because" {0}`,space_paused:"the space is paused",use_via_api:"Use via API"},Vo={uploading:"Uploading..."},qo={highlighted_text:"Highlighted Text"},zo={allow_webcam_access:"Please allow access to the webcam for recording.",brush_color:"Brush color",brush_radius:"Brush radius",image:"Image",remove_image:"Remove Image",select_brush_color:"Select brush color",start_drawing:"Start drawing",use_brush:"Use brush"},Xo={label:"Label"},Wo={enable_cookies:"If you are visiting a HuggingFace Space in Incognito mode, you must enable third party cookies.",incorrect_credentials:"Incorrect Credentials",login:"Login"},Zo={number:"Number"},Yo={plot:"Plot"},Jo={radio:"Radio"},Qo={slider:"Slider"},Ko={click_to_upload:"Click to Upload",drop_audio:"Drop Audio Here",drop_csv:"Drop CSV Here",drop_file:"Drop File Here",drop_image:"Drop Image Here",drop_video:"Drop Video Here"},el={"3D_model":{"3d_model":"3D Model"},annotated_image:Io,audio:No,blocks:Lo,checkbox:Mo,code:Ro,color_picker:jo,common:Do,dataframe:Go,dropdown:Uo,errors:Fo,file:Vo,highlighted_text:qo,image:zo,label:Xo,login:Wo,number:Zo,plot:Yo,radio:Jo,slider:Qo,upload_text:Ko},tl=Object.freeze(Object.defineProperty({__proto__:null,annotated_image:Io,audio:No,blocks:Lo,checkbox:Mo,code:Ro,color_picker:jo,common:Do,dataframe:Go,default:el,dropdown:Uo,errors:Fo,file:Vo,highlighted_text:qo,image:zo,label:Xo,login:Wo,number:Zo,plot:Yo,radio:Jo,slider:Qo,upload_text:Ko},Symbol.toStringTag,{value:"Module"})),$o={built_with_gradio:"Construido con Gradio",clear:"Limpiar",or:"o",submit:"Enviar"},en={click_to_upload:"Haga click para cargar",drop_audio:"Coloque el audio aquí",drop_csv:"Coloque el CSV aquí",drop_file:"Coloque el archivo aquí",drop_image:"Coloque la imagen aquí",drop_video:"Coloque el video aquí"},rl={common:$o,upload_text:en},ol=Object.freeze(Object.defineProperty({__proto__:null,common:$o,default:rl,upload_text:en},Symbol.toStringTag,{value:"Module"})),tn={built_with_gradio:"Gradiorekin eraikia",clear:"Garbitu",or:"edo",submit:"Bidali"},rn={click_to_upload:"Klik egin kargatzeko",drop_audio:"Jarri hemen audioa",drop_csv:"Jarri hemen CSVa",drop_file:"Jarri hemen fitxategia",drop_image:"Jarri hemen irudia",drop_video:"Jarri hemen bideoa"},nl={common:tn,upload_text:rn},il=Object.freeze(Object.defineProperty({__proto__:null,common:tn,default:nl,upload_text:rn},Symbol.toStringTag,{value:"Module"})),on={built_with_gradio:"ساخته شده با gradio",clear:"حذف",or:"یا",submit:"ارسال"},nn={click_to_upload:"برای آپلود کلیک کنید",drop_audio:"صوت را اینجا رها کنید",drop_csv:"فایل csv را اینجا رها کنید",drop_file:"فایل را اینجا رها کنید",drop_image:"تصویر را اینجا رها کنید",drop_video:"ویدیو 
را اینجا رها کنید"},al={common:on,upload_text:nn},sl=Object.freeze(Object.defineProperty({__proto__:null,common:on,default:al,upload_text:nn},Symbol.toStringTag,{value:"Module"})),an={allow_recording_access:"Veuillez autoriser l'accès à l'enregistrement",audio:"Audio",record_from_microphone:"Enregistrer avec le microphone",stop_recording:"Arrêter l'enregistrement"},sn={built_with:"Construit avec",built_with_gradio:"Construit avec Gradio",clear:"Effacer",download:"Télécharger",edit:"Éditer",error:"Erreur",loading:"Chargement",logo:"logo",or:"ou",remove:"Supprimer",share:"Partager",submit:"Soumettre"},ln={click_to_upload:"Cliquer pour Télécharger",drop_audio:"Déposer l'Audio Ici",drop_csv:"Déposer le CSV Ici",drop_file:"Déposer le Fichier Ici",drop_image:"Déposer l'Image Ici",drop_video:"Déposer la Vidéo Ici"},ll={audio:an,common:sn,upload_text:ln},ul=Object.freeze(Object.defineProperty({__proto__:null,audio:an,common:sn,default:ll,upload_text:ln},Symbol.toStringTag,{value:"Module"})),un={built_with_gradio:"בנוי עם גרדיו",clear:"נקה",or:"או",submit:"שלח"},cn={click_to_upload:"לחץ כדי להעלות",drop_audio:"גרור לכאן קובץ שמע",drop_csv:"גרור csv קובץ לכאן",drop_file:"גרור קובץ לכאן",drop_image:"גרור קובץ תמונה לכאן",drop_video:"גרור קובץ סרטון לכאן"},cl={common:un,upload_text:cn},fl=Object.freeze(Object.defineProperty({__proto__:null,common:un,default:cl,upload_text:cn},Symbol.toStringTag,{value:"Module"})),fn={built_with_gradio:"Gradio से बना",clear:"हटाये",or:"या",submit:"सबमिट करे"},_n={click_to_upload:"अपलोड के लिए बटन दबायें",drop_audio:"यहाँ ऑडियो ड्रॉप करें",drop_csv:"यहाँ CSV ड्रॉप करें",drop_file:"यहाँ File ड्रॉप करें",drop_image:"यहाँ इमेज ड्रॉप करें",drop_video:"यहाँ वीडियो ड्रॉप करें"},_l={common:fn,upload_text:_n},hl=Object.freeze(Object.defineProperty({__proto__:null,common:fn,default:_l,upload_text:_n},Symbol.toStringTag,{value:"Module"})),hn={built_with_gradio:"gradioで作ろう",clear:"クリア",or:"または",submit:"送信"},dn={click_to_upload:"クリックしてアップロード",drop_audio:"ここに音声をドロップ",drop_csv:"ここにCSVをドロップ",drop_file:"ここにファイルをドロップ",drop_image:"ここに画像をドロップ",drop_video:"ここに動画をドロップ"},dl={common:hn,upload_text:dn},ml=Object.freeze(Object.defineProperty({__proto__:null,common:hn,default:dl,upload_text:dn},Symbol.toStringTag,{value:"Module"})),mn={built_with_gradio:"gradio로 제작되었습니다",clear:"클리어",or:"또는",submit:"제출하기"},pn={click_to_upload:"클릭해서 업로드하기",drop_audio:"오디오를 끌어 놓으세요",drop_csv:"CSV파일을 끌어 놓으세요",drop_file:"파일을 끌어 놓으세요",drop_image:"이미지를 끌어 놓으세요",drop_video:"비디오를 끌어 놓으세요"},pl={common:mn,upload_text:pn},gl=Object.freeze(Object.defineProperty({__proto__:null,common:mn,default:pl,upload_text:pn},Symbol.toStringTag,{value:"Module"})),gn={built_with_gradio:"sukurta su gradio",clear:"Trinti",or:"arba",submit:"Pateikti"},bn={click_to_upload:"Spustelėkite norėdami įkelti",drop_audio:"Įkelkite garso įrašą čia",drop_csv:"Įkelkite CSV čia",drop_file:"Įkelkite bylą čia",drop_image:"Įkelkite paveikslėlį čia",drop_video:"Įkelkite vaizdo įrašą čia"},bl={common:gn,upload_text:bn},vl=Object.freeze(Object.defineProperty({__proto__:null,common:gn,default:bl,upload_text:bn},Symbol.toStringTag,{value:"Module"})),vn={built_with_gradio:"gemaakt met gradio",clear:"Wis",or:"of",submit:"Zend in"},yn={click_to_upload:"Klik om the Uploaden",drop_audio:"Sleep een Geluidsbestand hier",drop_csv:"Sleep een CSV hier",drop_file:"Sleep een Document hier",drop_image:"Sleep een Afbeelding hier",drop_video:"Sleep een Video 
hier"},yl={common:vn,upload_text:yn},El=Object.freeze(Object.defineProperty({__proto__:null,common:vn,default:yl,upload_text:yn},Symbol.toStringTag,{value:"Module"})),En={built_with_gradio:"utworzone z gradio",clear:"Wyczyść",or:"lub",submit:"Zatwierdź"},wn={click_to_upload:"Kliknij, aby przesłać",drop_audio:"Przeciągnij tutaj audio",drop_csv:"Przeciągnij tutaj CSV",drop_file:"Przeciągnij tutaj plik",drop_image:"Przeciągnij tutaj zdjęcie",drop_video:"Przeciągnij tutaj video"},wl={common:En,upload_text:wn},Sl=Object.freeze(Object.defineProperty({__proto__:null,common:En,default:wl,upload_text:wn},Symbol.toStringTag,{value:"Module"})),Sn={built_with_gradio:"Construído com gradio",clear:"Limpar",error:"Erro",flag:"Marcar",loading:"Carregando",or:"ou",submit:"Enviar"},Tn={click_to_upload:"Clique para o Upload",drop_audio:"Solte o Áudio Aqui",drop_csv:"Solte o CSV Aqui",drop_file:"Solte o Arquivo Aqui",drop_image:"Solte a Imagem Aqui",drop_video:"Solte o Vídeo Aqui"},Tl={common:Sn,upload_text:Tn},xl=Object.freeze(Object.defineProperty({__proto__:null,common:Sn,default:Tl,upload_text:Tn},Symbol.toStringTag,{value:"Module"})),xn={built_with_gradio:"сделано с помощью gradio",clear:"Очистить",or:"или",submit:"Исполнить"},Hn={click_to_upload:"Нажмите, чтобы загрузить",drop_audio:"Поместите Аудио Здесь",drop_csv:"Поместите CSV Здесь",drop_file:"Поместите Документ Здесь",drop_image:"Поместите Изображение Здесь",drop_video:"Поместите Видео Здесь"},Hl={common:xn,upload_text:Hn},Ol=Object.freeze(Object.defineProperty({__proto__:null,common:xn,default:Hl,upload_text:Hn},Symbol.toStringTag,{value:"Module"})),On={built_with_gradio:"கிரேடியோ வுடன் உருவாக்கப்பட்டது",clear:"அழிக்கவும்",or:"அல்லது",submit:"சமர்ப்பிக்கவும்"},Pn={click_to_upload:"பதிவேற்ற அழுத்தவும்",drop_audio:"ஆடியோவை பதிவேற்றவும்",drop_csv:"csv ஐ பதிவேற்றவும்",drop_file:"கோப்பை பதிவேற்றவும்",drop_image:"படத்தை பதிவேற்றவும்",drop_video:"காணொளியை பதிவேற்றவும்"},Pl={common:On,upload_text:Pn},kl=Object.freeze(Object.defineProperty({__proto__:null,common:On,default:Pl,upload_text:Pn},Symbol.toStringTag,{value:"Module"})),kn={built_with_gradio:"Gradio ile oluşturulmuştur",clear:"Temizle",or:"veya",submit:"Yükle"},Bn={click_to_upload:"Yüklemek için Tıkla",drop_audio:"Kaydı Buraya Sürükle",drop_csv:"CSV'yi Buraya Sürükle",drop_file:"Dosyayı Buraya Sürükle",drop_image:"Resmi Buraya Sürükle",drop_video:"Videoyu Buraya Sürükle"},Bl={common:kn,upload_text:Bn},Al=Object.freeze(Object.defineProperty({__proto__:null,common:kn,default:Bl,upload_text:Bn},Symbol.toStringTag,{value:"Module"})),An={built_with_gradio:"Зроблено на основі gradio",clear:"Очистити",or:"або",submit:"Надіслати"},Cn={click_to_upload:"Натисніть щоб завантажити",drop_audio:"Перетягніть аудіо сюди",drop_csv:"Перетягніть CSV-файл сюди",drop_file:"Перетягніть файл сюди",drop_image:"Перетягніть зображення сюди",drop_video:"Перетягніть відео сюди"},Cl={common:An,upload_text:Cn},Il=Object.freeze(Object.defineProperty({__proto__:null,common:An,default:Cl,upload_text:Cn},Symbol.toStringTag,{value:"Module"})),In={built_with_gradio:"کے ساتھ بنایا گیا Gradio",clear:"ہٹا دیں",or:"یا",submit:"جمع کریں"},Nn={click_to_upload:"اپ لوڈ کے لیے کلک کریں",drop_audio:"یہاں آڈیو ڈراپ کریں",drop_csv:"یہاں فائل ڈراپ کریں",drop_file:"یہاں فائل ڈراپ کریں",drop_image:"یہاں تصویر ڈراپ کریں",drop_video:"یہاں ویڈیو ڈراپ کریں"},Nl={common:In,upload_text:Nn},Ll=Object.freeze(Object.defineProperty({__proto__:null,common:In,default:Nl,upload_text:Nn},Symbol.toStringTag,{value:"Module"})),Ln={built_with_gradio:"gradio bilan 
qilingan",clear:"Tozalash",submit:"Yubor"},Mn={click_to_upload:"Yuklash uchun Bosing",drop_audio:"Audioni Shu Yerga Tashlang",drop_csv:"CSVni Shu Yerga Tashlang",drop_file:"Faylni Shu Yerga Tashlang",drop_image:"Rasmni Shu Yerga Tashlang",drop_video:"Videoni Shu Yerga Tashlang"},Ml={common:Ln,upload_text:Mn},Rl=Object.freeze(Object.defineProperty({__proto__:null,common:Ln,default:Ml,upload_text:Mn},Symbol.toStringTag,{value:"Module"})),Rn={built_with_gradio:"使用Gradio构建",clear:"清除",or:"或",submit:"提交"},jn={click_to_upload:"点击上传",drop_audio:"拖放音频至此处",drop_csv:"拖放CSV至此处",drop_file:"拖放文件至此处",drop_image:"拖放图片至此处",drop_video:"拖放视频至此处"},jl={common:Rn,upload_text:jn},Dl=Object.freeze(Object.defineProperty({__proto__:null,common:Rn,default:jl,upload_text:jn},Symbol.toStringTag,{value:"Module"})),Dn={built_with_gradio:"使用Gradio構建",clear:"清除",or:"或",submit:"提交"},Gn={click_to_upload:"點擊上傳",drop_audio:"拖放音訊至此處",drop_csv:"拖放CSV至此處",drop_file:"拖放檔案至此處",drop_image:"拖放圖片至此處",drop_video:"拖放影片至此處"},Gl={common:Dn,upload_text:Gn},Ul=Object.freeze(Object.defineProperty({__proto__:null,common:Dn,default:Gl,upload_text:Gn},Symbol.toStringTag,{value:"Module"})),gr=Object.assign({"./lang/ar.json":Ws,"./lang/ca.json":Ys,"./lang/ckb.json":Qs,"./lang/de.json":$s,"./lang/en.json":tl,"./lang/es.json":ol,"./lang/eu.json":il,"./lang/fa.json":sl,"./lang/fr.json":ul,"./lang/he.json":fl,"./lang/hi.json":hl,"./lang/ja.json":ml,"./lang/ko.json":gl,"./lang/lt.json":vl,"./lang/nl.json":El,"./lang/pl.json":Sl,"./lang/pt-BR.json":xl,"./lang/ru.json":Ol,"./lang/ta.json":kl,"./lang/tr.json":Al,"./lang/uk.json":Il,"./lang/ur.json":Ll,"./lang/uz.json":Rl,"./lang/zh-CN.json":Dl,"./lang/zh-TW.json":Ul});function Fl(){let e={};for(const t in gr){const r=t.split("/").pop().split(".").shift();e[r]=gr[t].default}return e}const br=Fl();for(const e in br)Yr(e,br[e]);async function Vl(){await Ha({fallbackLocale:"en",initialLocale:La()})}const{setContext:ql,getContext:zl}=window.__gradio__svelte__internal,Un="WORKER_PROXY_CONTEXT_KEY";function Xl(e){ql(Un,e)}function bu(){return zl(Un)}const{SvelteComponent:Wl,add_flush_callback:xt,append:te,assign:Zl,attr:se,bind:Ht,binding_callbacks:Ot,check_outros:vr,component_subscribe:yr,create_component:ot,destroy_component:nt,detach:Me,element:ge,empty:Yl,get_spread_object:Jl,get_spread_update:Ql,group_outros:Er,init:Kl,insert:Re,mount_component:it,safe_not_equal:$l,set_data:Fn,space:Vn,text:Ie,transition_in:$,transition_out:le}=window.__gradio__svelte__internal,{onMount:wr,setContext:eu}=window.__gradio__svelte__internal;function Sr(e){let t,r;return t=new zs({props:{absolute:!e[4],status:e[14],timer:!1,queue_position:null,queue_size:null,translucent:!0,loading_text:e[15],i18n:e[22],autoscroll:e[0],$$slots:{error:[ou]},$$scope:{ctx:e}}}),{c(){ot(t.$$.fragment)},m(o,n){it(t,o,n),r=!0},p(o,n){const i={};n[0]&16&&(i.absolute=!o[4]),n[0]&16384&&(i.status=o[14]),n[0]&32768&&(i.loading_text=o[15]),n[0]&4194304&&(i.i18n=o[22]),n[0]&1&&(i.autoscroll=o[0]),n[0]&4202752|n[1]&65536&&(i.$$scope={dirty:n,ctx:o}),t.$set(i)},i(o){r||($(t.$$.fragment,o),r=!0)},o(o){le(t.$$.fragment,o),r=!1},d(o){nt(t,o)}}}function tu(e){let t,r=e[22]("errors.contact_page_author")+"",o;return{c(){t=ge("p"),o=Ie(r),se(t,"class","svelte-y6l4b")},m(n,i){Re(n,t,i),te(t,o)},p(n,i){i[0]&4194304&&r!==(r=n[22]("errors.contact_page_author")+"")&&Fn(o,r)},d(n){n&&Me(t)}}}function ru(e){let t,r,o,n,i,a;return{c(){t=ge("p"),r=Ie("Please "),o=ge("a"),n=Ie("contact the author of the space"),a=Ie(" to let them 
know."),se(o,"href",i="https://huggingface.co/spaces/"+e[8]+"/discussions/new?title="+e[23].title(e[13]?.detail)+"&description="+e[23].description(e[13]?.detail,location.origin)),se(o,"class","svelte-y6l4b"),se(t,"class","svelte-y6l4b")},m(l,c){Re(l,t,c),te(t,r),te(t,o),te(o,n),te(t,a)},p(l,c){c[0]&8448&&i!==(i="https://huggingface.co/spaces/"+l[8]+"/discussions/new?title="+l[23].title(l[13]?.detail)+"&description="+l[23].description(l[13]?.detail,location.origin))&&se(o,"href",i)},d(l){l&&Me(t)}}}function ou(e){let t,r,o,n=(e[13]?.message||"")+"",i,a;function l(u,f){return(u[13].status==="space_error"||u[13].status==="paused")&&u[13].discussions_enabled?ru:tu}let c=l(e),s=c(e);return{c(){t=ge("div"),r=ge("p"),o=ge("strong"),i=Ie(n),a=Vn(),s.c(),se(r,"class","svelte-y6l4b"),se(t,"class","error svelte-y6l4b"),se(t,"slot","error")},m(u,f){Re(u,t,f),te(t,r),te(r,o),te(o,i),te(t,a),s.m(t,null)},p(u,f){f[0]&8192&&n!==(n=(u[13]?.message||"")+"")&&Fn(i,n),c===(c=l(u))&&s?s.p(u,f):(s.d(1),s=c(u),s&&(s.c(),s.m(t,null)))},d(u){u&&Me(t),s.d()}}}function nu(e){let t,r,o,n;const i=[{app:e[18]},e[12],{theme_mode:e[16]},{control_page_title:e[5]},{target:e[9]},{autoscroll:e[0]},{show_footer:!e[4]},{app_mode:e[3]},{version:e[1]},{api_url:e[17]}];function a(s){e[33](s)}function l(s){e[34](s)}let c={};for(let s=0;sHt(t,"ready",a)),Ot.push(()=>Ht(t,"render_complete",l)),{c(){ot(t.$$.fragment)},m(s,u){it(t,s,u),n=!0},p(s,u){const f=u[0]&463419?Ql(i,[u[0]&262144&&{app:s[18]},u[0]&4096&&Jl(s[12]),u[0]&65536&&{theme_mode:s[16]},u[0]&32&&{control_page_title:s[5]},u[0]&512&&{target:s[9]},u[0]&1&&{autoscroll:s[0]},u[0]&16&&{show_footer:!s[4]},u[0]&8&&{app_mode:s[3]},u[0]&2&&{version:s[1]},u[0]&131072&&{api_url:s[17]}]):{};!r&&u[0]&1024&&(r=!0,f.ready=s[10],xt(()=>r=!1)),!o&&u[0]&2048&&(o=!0,f.render_complete=s[11],xt(()=>o=!1)),t.$set(f)},i(s){n||($(t.$$.fragment,s),n=!0)},o(s){le(t.$$.fragment,s),n=!1},d(s){nt(t,s)}}}function iu(e){let t,r;return t=new e[21]({props:{auth_message:e[12].auth_message,root:e[12].root,space_id:e[8],app_mode:e[3]}}),{c(){ot(t.$$.fragment)},m(o,n){it(t,o,n),r=!0},p(o,n){const i={};n[0]&4096&&(i.auth_message=o[12].auth_message),n[0]&4096&&(i.root=o[12].root),n[0]&256&&(i.space_id=o[8]),n[0]&8&&(i.app_mode=o[3]),t.$set(i)},i(o){r||($(t.$$.fragment,o),r=!0)},o(o){le(t.$$.fragment,o),r=!1},d(o){nt(t,o)}}}function au(e){let t,r,o,n,i,a=(e[14]==="pending"||e[14]==="error")&&!(e[12]&&e[12]?.auth_required)&&Sr(e);const l=[iu,nu],c=[];function s(u,f){return u[12]?.auth_required&&u[21]?0:u[12]&&u[20]&&u[19]?1:-1}return~(r=s(e))&&(o=c[r]=l[r](e)),{c(){a&&a.c(),t=Vn(),o&&o.c(),n=Yl()},m(u,f){a&&a.m(u,f),Re(u,t,f),~r&&c[r].m(u,f),Re(u,n,f),i=!0},p(u,f){(u[14]==="pending"||u[14]==="error")&&!(u[12]&&u[12]?.auth_required)?a?(a.p(u,f),f[0]&20480&&$(a,1)):(a=Sr(u),a.c(),$(a,1),a.m(t.parentNode,t)):a&&(Er(),le(a,1,1,()=>{a=null}),vr());let _=r;r=s(u),r===_?~r&&c[r].p(u,f):(o&&(Er(),le(c[_],1,1,()=>{c[_]=null}),vr()),~r?(o=c[r],o?o.p(u,f):(o=c[r]=l[r](u),o.c()),$(o,1),o.m(n.parentNode,n)):o=null)},i(u){i||($(a),$(o),i=!0)},o(u){le(a),le(o),i=!1},d(u){u&&(Me(t),Me(n)),a&&a.d(u),~r&&c[r].d(u)}}}function su(e){let t,r,o;function n(a){e[35](a)}let i={display:e[6]&&e[4],is_embed:e[4],info:!!e[8]&&e[7],version:e[1],initial_height:e[2],space:e[8],loaded:e[14]==="complete",$$slots:{default:[au]},$$scope:{ctx:e}};return e[9]!==void 0&&(i.wrapper=e[9]),t=new ls({props:i}),Ot.push(()=>Ht(t,"wrapper",n)),{c(){ot(t.$$.fragment)},m(a,l){it(t,a,l),o=!0},p(a,l){const 
c={};l[0]&80&&(c.display=a[6]&&a[4]),l[0]&16&&(c.is_embed=a[4]),l[0]&384&&(c.info=!!a[8]&&a[7]),l[0]&2&&(c.version=a[1]),l[0]&4&&(c.initial_height=a[2]),l[0]&256&&(c.space=a[8]),l[0]&16384&&(c.loaded=a[14]==="complete"),l[0]&8388411|l[1]&65536&&(c.$$scope={dirty:l,ctx:a}),!r&&l[0]&512&&(r=!0,c.wrapper=a[9],xt(()=>r=!1)),t.$set(c)},i(a){o||($(t.$$.fragment,a),o=!0)},o(a){le(t.$$.fragment,a),o=!1},d(a){nt(t,a)}}}let lu=-1;function uu(){const e=Te({}),t=new Map,r=new IntersectionObserver(n=>{n.forEach(i=>{if(i.isIntersecting){let a=t.get(i.target);a!==void 0&&e.update(l=>({...l,[a]:!0}))}})});function o(n,i){t.set(i,n),r.observe(i)}return{register:o,subscribe:e.subscribe}}const Tr=uu();async function cu(e){const t=new DOMParser;if(e){const r=t.parseFromString(e,"text/html").head.firstChild;r&&document.head.append(r)}}function fu(e,t,r){let o,n;yr(e,eo,d=>r(22,o=d)),yr(e,Tr,d=>r(32,n=d)),Vl();let{autoscroll:i}=t,{version:a}=t,{initial_height:l}=t,{app_mode:c}=t,{is_embed:s}=t,{theme_mode:u="system"}=t,{control_page_title:f}=t,{container:_}=t,{info:m}=t,{eager:p}=t,g,{mount_css:w=Kn}=t,{client:P}=t,{upload_files:S}=t,{worker_proxy:h=void 0}=t;h&&(Xl(h),h.addEventListener("progress-update",d=>{r(15,Z=d.detail+"...")}));let{space:v}=t,{host:A}=t,{src:G}=t,z=lu++,re="pending",U,D=!1,oe=!1,C,Z=o("common.loading")+"...",R,j;async function Ge(d,F){if(F){let I=document.createElement("style");I.innerHTML=F,d.appendChild(I)}await w(C.root+"/theme.css",document.head),C.stylesheets&&await Promise.all(C.stylesheets.map(I=>{let ne=I.startsWith("http:")||I.startsWith("https:");return w(ne?I:C.root+"/"+I,document.head)}))}function at(d){let I=new URL(window.location.toString()).searchParams.get("__theme");return r(16,R=u||I||"system"),R==="dark"||R==="light"?ke(d,R):r(16,R=Ue(d)),R}function Ue(d){const F=I();window?.matchMedia("(prefers-color-scheme: dark)")?.addEventListener("change",I);function I(){let ne=window?.matchMedia?.("(prefers-color-scheme: dark)").matches?"dark":"light";return ke(d,ne),ne}return F}function ke(d,F){const I=s?d.parentElement:document.body,ne=s?d:d.parentElement;ne.style.background="var(--body-background-fill)",F==="dark"?I.classList.add("dark"):I.classList.remove("dark")}let Q={message:"",load_status:"pending",status:"sleeping",detail:"SLEEPING"},ce,Fe=!1;function b(d){r(13,Q=d)}wr(async()=>{window.__gradio_mode__!=="website"&&r(16,R=at(U));const d=window.__GRADIO_DEV__,F=window.__GRADIO__SERVER_PORT__;r(17,j=d==="dev"?`http://localhost:${typeof F=="number"?F:7860}`:A||v||G||location.origin),r(18,ce=await P(j,{status_callback:b,normalise_files:!1})),r(12,C=ce.config),window.__gradio_space__=C.space_id,r(13,Q={message:"",load_status:"complete",status:"running",detail:"RUNNING"}),await Ge(U,C.css),await cu(C.head),r(19,Fe=!0),window.__is_colab__=C.is_colab,C.dev_mode&&setTimeout(()=>{const{host:I}=new URL(j);let ne=new URL(`http://${I}/dev/reload`);g=new EventSource(ne),g.onmessage=async function(Qn){Qn.data==="CHANGE"&&(r(18,ce=await P(j,{status_callback:b,normalise_files:!1})),r(12,C=ce.config),window.__gradio_space__=C.space_id)}},200)}),eu("upload_files",S);let Ct,It;async function qn(){r(20,Ct=(await Nt(()=>import("./Blocks-9824d5aa.js").then(d=>d.B),["assets/Blocks-9824d5aa.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-89057c03.js","assets/Button-8a6aeb2c.css","assets/Blocks-f54bccc5.css"])).default)}async function zn(){r(21,It=(await 
Nt(()=>import("./Login-bbd6e215.js"),["assets/Login-bbd6e215.js","assets/Index-09f26e4b.js","assets/Index-3812b7f1.css","assets/Textbox-96e72fd5.js","assets/Button-89057c03.js","assets/index-0526d562.js","assets/index-02e0d00d.css","assets/Button-8a6aeb2c.css","assets/BlockTitle-49fa584d.js","assets/Info-586340e7.js","assets/Copy-1b5c0932.js","assets/Textbox-dde6f8cc.css","assets/Index-ab6a99fa.js","assets/Index-2853eb31.css","assets/Login-9c3cc0eb.css","assets/Example-6ded08d8.css"])).default)}function Xn(){C.auth_required?zn():qn()}const Wn={readable_error:{NO_APP_FILE:o("errors.no_app_file"),CONFIG_ERROR:o("errors.config_error"),BUILD_ERROR:o("errors.build_error"),RUNTIME_ERROR:o("errors.runtime_error"),PAUSED:o("errors.space_paused")},title(d){return encodeURIComponent(o("errors.space_not_working"))},description(d,F){return encodeURIComponent(`Hello, - -Firstly, thanks for creating this space! - -I noticed that the space isn't working correctly because there is ${this.readable_error[d]||"an error"}. - -It would be great if you could take a look at this because this space is being embedded on ${F}. - -Thanks!`)}};wr(async()=>{Tr.register(z,U)});function Zn(d){D=d,r(10,D)}function Yn(d){oe=d,r(11,oe)}function Jn(d){U=d,r(9,U)}return e.$$set=d=>{"autoscroll"in d&&r(0,i=d.autoscroll),"version"in d&&r(1,a=d.version),"initial_height"in d&&r(2,l=d.initial_height),"app_mode"in d&&r(3,c=d.app_mode),"is_embed"in d&&r(4,s=d.is_embed),"theme_mode"in d&&r(24,u=d.theme_mode),"control_page_title"in d&&r(5,f=d.control_page_title),"container"in d&&r(6,_=d.container),"info"in d&&r(7,m=d.info),"eager"in d&&r(25,p=d.eager),"mount_css"in d&&r(26,w=d.mount_css),"client"in d&&r(27,P=d.client),"upload_files"in d&&r(28,S=d.upload_files),"worker_proxy"in d&&r(29,h=d.worker_proxy),"space"in d&&r(8,v=d.space),"host"in d&&r(30,A=d.host),"src"in d&&r(31,G=d.src)},e.$$.update=()=>{e.$$.dirty[0]&4096&&C?.app_id&&C.app_id,e.$$.dirty[0]&9216&&r(14,re=!D&&Q.load_status!=="error"?"pending":!D&&Q.load_status==="error"?"error":Q.load_status),e.$$.dirty[0]&33558528|e.$$.dirty[1]&2&&C&&(p||n[z])&&Xn(),e.$$.dirty[0]&2560&&oe&&U.dispatchEvent(new CustomEvent("render",{bubbles:!0,cancelable:!1,composed:!0}))},[i,a,l,c,s,f,_,m,v,U,D,oe,C,Q,re,Z,R,j,ce,Fe,Ct,It,o,Wn,u,p,w,P,S,h,A,G,n,Zn,Yn,Jn]}class _u extends Wl{constructor(t){super(),Kl(this,t,fu,su,$l,{autoscroll:0,version:1,initial_height:2,app_mode:3,is_embed:4,theme_mode:24,control_page_title:5,container:6,info:7,eager:25,mount_css:26,client:27,upload_files:28,worker_proxy:29,space:8,host:30,src:31},null,[-1,-1])}}const vu=Object.freeze(Object.defineProperty({__proto__:null,default:_u},Symbol.toStringTag,{value:"Module"}));export{eo as $,vu as I,bs as L,zs as S,ai as a,ti as b,gu as c,mu as d,Vl as e,bu as g,du as i,pu as s,Te as w}; -//# sourceMappingURL=Index-37584f50.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/laguerre.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/laguerre.py deleted file mode 100644 index 925d4898ec07673f221937fff1082711a9851df9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/polynomial/laguerre.py +++ /dev/null @@ -1,1651 +0,0 @@ -""" -================================================== -Laguerre Series (:mod:`numpy.polynomial.laguerre`) -================================================== - -This module provides a number of objects (mostly functions) useful for -dealing with Laguerre series, 
including a `Laguerre` class that -encapsulates the usual arithmetic operations. (General information -on how this module represents and works with such polynomials is in the -docstring for its "parent" sub-package, `numpy.polynomial`). - -Classes -------- -.. autosummary:: - :toctree: generated/ - - Laguerre - -Constants ---------- -.. autosummary:: - :toctree: generated/ - - lagdomain - lagzero - lagone - lagx - -Arithmetic ----------- -.. autosummary:: - :toctree: generated/ - - lagadd - lagsub - lagmulx - lagmul - lagdiv - lagpow - lagval - lagval2d - lagval3d - laggrid2d - laggrid3d - -Calculus --------- -.. autosummary:: - :toctree: generated/ - - lagder - lagint - -Misc Functions --------------- -.. autosummary:: - :toctree: generated/ - - lagfromroots - lagroots - lagvander - lagvander2d - lagvander3d - laggauss - lagweight - lagcompanion - lagfit - lagtrim - lagline - lag2poly - poly2lag - -See also --------- -`numpy.polynomial` - -""" -import numpy as np -import numpy.linalg as la -from numpy.core.multiarray import normalize_axis_index - -from . import polyutils as pu -from ._polybase import ABCPolyBase - -__all__ = [ - 'lagzero', 'lagone', 'lagx', 'lagdomain', 'lagline', 'lagadd', - 'lagsub', 'lagmulx', 'lagmul', 'lagdiv', 'lagpow', 'lagval', 'lagder', - 'lagint', 'lag2poly', 'poly2lag', 'lagfromroots', 'lagvander', - 'lagfit', 'lagtrim', 'lagroots', 'Laguerre', 'lagval2d', 'lagval3d', - 'laggrid2d', 'laggrid3d', 'lagvander2d', 'lagvander3d', 'lagcompanion', - 'laggauss', 'lagweight'] - -lagtrim = pu.trimcoef - - -def poly2lag(pol): - """ - poly2lag(pol) - - Convert a polynomial to a Laguerre series. - - Convert an array representing the coefficients of a polynomial (relative - to the "standard" basis) ordered from lowest degree to highest, to an - array of the coefficients of the equivalent Laguerre series, ordered - from lowest to highest degree. - - Parameters - ---------- - pol : array_like - 1-D array containing the polynomial coefficients - - Returns - ------- - c : ndarray - 1-D array containing the coefficients of the equivalent Laguerre - series. - - See Also - -------- - lag2poly - - Notes - ----- - The easy way to do conversions between polynomial basis sets - is to use the convert method of a class instance. - - Examples - -------- - >>> from numpy.polynomial.laguerre import poly2lag - >>> poly2lag(np.arange(4)) - array([ 23., -63., 58., -18.]) - - """ - [pol] = pu.as_series([pol]) - res = 0 - for p in pol[::-1]: - res = lagadd(lagmulx(res), p) - return res - - -def lag2poly(c): - """ - Convert a Laguerre series to a polynomial. - - Convert an array representing the coefficients of a Laguerre series, - ordered from lowest degree to highest, to an array of the coefficients - of the equivalent polynomial (relative to the "standard" basis) ordered - from lowest to highest degree. - - Parameters - ---------- - c : array_like - 1-D array containing the Laguerre series coefficients, ordered - from lowest order term to highest. - - Returns - ------- - pol : ndarray - 1-D array containing the coefficients of the equivalent polynomial - (relative to the "standard" basis) ordered from lowest order term - to highest. - - See Also - -------- - poly2lag - - Notes - ----- - The easy way to do conversions between polynomial basis sets - is to use the convert method of a class instance. 
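As a minimal sketch of the class-based conversion the note above refers to (an illustration assuming only numpy itself; not part of the original sources):

    from numpy.polynomial import Laguerre, Polynomial

    # Round-trip a series through the class interface; the coefficients
    # agree with poly2lag/lag2poly up to roundoff.
    p = Laguerre([23., -63., 58., -18.]).convert(kind=Polynomial)
    print(p.coef)   # [0. 1. 2. 3.]
    q = Polynomial([0., 1., 2., 3.]).convert(kind=Laguerre)
    print(q.coef)   # [ 23. -63.  58. -18.]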
- - Examples - -------- - >>> from numpy.polynomial.laguerre import lag2poly - >>> lag2poly([ 23., -63., 58., -18.]) - array([0., 1., 2., 3.]) - - """ - from .polynomial import polyadd, polysub, polymulx - - [c] = pu.as_series([c]) - n = len(c) - if n == 1: - return c - else: - c0 = c[-2] - c1 = c[-1] - # i is the current degree of c1 - for i in range(n - 1, 1, -1): - tmp = c0 - c0 = polysub(c[i - 2], (c1*(i - 1))/i) - c1 = polyadd(tmp, polysub((2*i - 1)*c1, polymulx(c1))/i) - return polyadd(c0, polysub(c1, polymulx(c1))) - -# -# These constant arrays are of integer type so as to be compatible -# with the widest range of other types, such as Decimal. -# - -# Laguerre -lagdomain = np.array([0, 1]) - -# Laguerre coefficients representing zero. -lagzero = np.array([0]) - -# Laguerre coefficients representing one. -lagone = np.array([1]) - -# Laguerre coefficients representing the identity x. -lagx = np.array([1, -1]) - - -def lagline(off, scl): - """ - Laguerre series whose graph is a straight line. - - Parameters - ---------- - off, scl : scalars - The specified line is given by ``off + scl*x``. - - Returns - ------- - y : ndarray - This module's representation of the Laguerre series for - ``off + scl*x``. - - See Also - -------- - numpy.polynomial.polynomial.polyline - numpy.polynomial.chebyshev.chebline - numpy.polynomial.legendre.legline - numpy.polynomial.hermite.hermline - numpy.polynomial.hermite_e.hermeline - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagline, lagval - >>> lagval(0,lagline(3, 2)) - 3.0 - >>> lagval(1,lagline(3, 2)) - 5.0 - - """ - if scl != 0: - return np.array([off + scl, -scl]) - else: - return np.array([off]) - - -def lagfromroots(roots): - """ - Generate a Laguerre series with given roots. - - The function returns the coefficients of the polynomial - - .. math:: p(x) = (x - r_0) * (x - r_1) * ... * (x - r_n), - - in Laguerre form, where the `r_n` are the roots specified in `roots`. - If a zero has multiplicity n, then it must appear in `roots` n times. - For instance, if 2 is a root of multiplicity three and 3 is a root of - multiplicity 2, then `roots` looks something like [2, 2, 2, 3, 3]. The - roots can appear in any order. - - If the returned coefficients are `c`, then - - .. math:: p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x) - - The coefficient of the last term is not generally 1 for monic - polynomials in Laguerre form. - - Parameters - ---------- - roots : array_like - Sequence containing the roots. - - Returns - ------- - out : ndarray - 1-D array of coefficients. If all roots are real then `out` is a - real array, if some of the roots are complex, then `out` is complex - even if all the coefficients in the result are real (see Examples - below). - - See Also - -------- - numpy.polynomial.polynomial.polyfromroots - numpy.polynomial.legendre.legfromroots - numpy.polynomial.chebyshev.chebfromroots - numpy.polynomial.hermite.hermfromroots - numpy.polynomial.hermite_e.hermefromroots - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagfromroots, lagval - >>> coef = lagfromroots((-1, 0, 1)) - >>> lagval((-1, 0, 1), coef) - array([0., 0., 0.]) - >>> coef = lagfromroots((-1j, 1j)) - >>> lagval((-1j, 1j), coef) - array([0.+0.j, 0.+0.j]) - - """ - return pu._fromroots(lagline, lagmul, roots) - - -def lagadd(c1, c2): - """ - Add one Laguerre series to another. - - Returns the sum of two Laguerre series `c1` + `c2`.
The arguments - are sequences of coefficients ordered from lowest order term to - highest, i.e., [1,2,3] represents the series ``P_0 + 2*P_1 + 3*P_2``. - - Parameters - ---------- - c1, c2 : array_like - 1-D arrays of Laguerre series coefficients ordered from low to - high. - - Returns - ------- - out : ndarray - Array representing the Laguerre series of their sum. - - See Also - -------- - lagsub, lagmulx, lagmul, lagdiv, lagpow - - Notes - ----- - Unlike multiplication, division, etc., the sum of two Laguerre series - is a Laguerre series (without having to "reproject" the result onto - the basis set) so addition, just like that of "standard" polynomials, - is simply "component-wise." - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagadd - >>> lagadd([1, 2, 3], [1, 2, 3, 4]) - array([2., 4., 6., 4.]) - - - """ - return pu._add(c1, c2) - - -def lagsub(c1, c2): - """ - Subtract one Laguerre series from another. - - Returns the difference of two Laguerre series `c1` - `c2`. The - sequences of coefficients are from lowest order term to highest, i.e., - [1,2,3] represents the series ``P_0 + 2*P_1 + 3*P_2``. - - Parameters - ---------- - c1, c2 : array_like - 1-D arrays of Laguerre series coefficients ordered from low to - high. - - Returns - ------- - out : ndarray - Of Laguerre series coefficients representing their difference. - - See Also - -------- - lagadd, lagmulx, lagmul, lagdiv, lagpow - - Notes - ----- - Unlike multiplication, division, etc., the difference of two Laguerre - series is a Laguerre series (without having to "reproject" the result - onto the basis set) so subtraction, just like that of "standard" - polynomials, is simply "component-wise." - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagsub - >>> lagsub([1, 2, 3, 4], [1, 2, 3]) - array([0., 0., 0., 4.]) - - """ - return pu._sub(c1, c2) - - -def lagmulx(c): - """Multiply a Laguerre series by x. - - Multiply the Laguerre series `c` by x, where x is the independent - variable. - - - Parameters - ---------- - c : array_like - 1-D array of Laguerre series coefficients ordered from low to - high. - - Returns - ------- - out : ndarray - Array representing the result of the multiplication. - - See Also - -------- - lagadd, lagsub, lagmul, lagdiv, lagpow - - Notes - ----- - The multiplication uses the recursion relationship for Laguerre - polynomials in the form - - .. math:: - - xP_i(x) = (-(i + 1)*P_{i + 1}(x) + (2i + 1)P_{i}(x) - iP_{i - 1}(x)) - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagmulx - >>> lagmulx([1, 2, 3]) - array([-1., -1., 11., -9.]) - - """ - # c is a trimmed copy - [c] = pu.as_series([c]) - # The zero series needs special treatment - if len(c) == 1 and c[0] == 0: - return c - - prd = np.empty(len(c) + 1, dtype=c.dtype) - prd[0] = c[0] - prd[1] = -c[0] - for i in range(1, len(c)): - prd[i + 1] = -c[i]*(i + 1) - prd[i] += c[i]*(2*i + 1) - prd[i - 1] -= c[i]*i - return prd - -
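A quick numerical check of the recursion relation used by ``lagmulx`` above (a sketch, assuming only numpy):

    from numpy.polynomial import laguerre as L

    # x*L_1(x) = -1*L_0(x) + 3*L_1(x) - 2*L_2(x), per the recursion above.
    print(L.lagmulx([0, 1]))   # [-1.  3. -2.]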
-def lagmul(c1, c2): - """ - Multiply one Laguerre series by another. - - Returns the product of two Laguerre series `c1` * `c2`. The arguments - are sequences of coefficients, from lowest order "term" to highest, - e.g., [1,2,3] represents the series ``P_0 + 2*P_1 + 3*P_2``. - - Parameters - ---------- - c1, c2 : array_like - 1-D arrays of Laguerre series coefficients ordered from low to - high. - - Returns - ------- - out : ndarray - Of Laguerre series coefficients representing their product. - - See Also - -------- - lagadd, lagsub, lagmulx, lagdiv, lagpow - - Notes - ----- - In general, the (polynomial) product of two C-series results in terms - that are not in the Laguerre polynomial basis set. Thus, to express - the product as a Laguerre series, it is necessary to "reproject" the - product onto said basis set, which may produce "unintuitive" (but - correct) results; see Examples section below. - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagmul - >>> lagmul([1, 2, 3], [0, 1, 2]) - array([ 8., -13., 38., -51., 36.]) - - """ - # s1, s2 are trimmed copies - [c1, c2] = pu.as_series([c1, c2]) - - if len(c1) > len(c2): - c = c2 - xs = c1 - else: - c = c1 - xs = c2 - - if len(c) == 1: - c0 = c[0]*xs - c1 = 0 - elif len(c) == 2: - c0 = c[0]*xs - c1 = c[1]*xs - else: - nd = len(c) - c0 = c[-2]*xs - c1 = c[-1]*xs - for i in range(3, len(c) + 1): - tmp = c0 - nd = nd - 1 - c0 = lagsub(c[-i]*xs, (c1*(nd - 1))/nd) - c1 = lagadd(tmp, lagsub((2*nd - 1)*c1, lagmulx(c1))/nd) - return lagadd(c0, lagsub(c1, lagmulx(c1))) - - -def lagdiv(c1, c2): - """ - Divide one Laguerre series by another. - - Returns the quotient-with-remainder of two Laguerre series - `c1` / `c2`. The arguments are sequences of coefficients from lowest - order "term" to highest, e.g., [1,2,3] represents the series - ``P_0 + 2*P_1 + 3*P_2``. - - Parameters - ---------- - c1, c2 : array_like - 1-D arrays of Laguerre series coefficients ordered from low to - high. - - Returns - ------- - [quo, rem] : ndarrays - Of Laguerre series coefficients representing the quotient and - remainder. - - See Also - -------- - lagadd, lagsub, lagmulx, lagmul, lagpow - - Notes - ----- - In general, the (polynomial) division of one Laguerre series by another - results in quotient and remainder terms that are not in the Laguerre - polynomial basis set. Thus, to express these results as a Laguerre - series, it is necessary to "reproject" the results onto the Laguerre - basis set, which may produce "unintuitive" (but correct) results; see - Examples section below. - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagdiv - >>> lagdiv([ 8., -13., 38., -51., 36.], [0, 1, 2]) - (array([1., 2., 3.]), array([0.])) - >>> lagdiv([ 9., -12., 38., -51., 36.], [0, 1, 2]) - (array([1., 2., 3.]), array([1., 1.])) - - """ - return pu._div(lagmul, c1, c2) - - -def lagpow(c, pow, maxpower=16): - """Raise a Laguerre series to a power. - - Returns the Laguerre series `c` raised to the power `pow`. The - argument `c` is a sequence of coefficients ordered from low to high, - i.e., [1,2,3] is the series ``P_0 + 2*P_1 + 3*P_2``. - - Parameters - ---------- - c : array_like - 1-D array of Laguerre series coefficients ordered from low to - high. - pow : integer - Power to which the series will be raised - maxpower : integer, optional - Maximum power allowed. This is mainly to limit growth of the series - to unmanageable size. Default is 16 - - Returns - ------- - coef : ndarray - Laguerre series of power. - - See Also - -------- - lagadd, lagsub, lagmulx, lagmul, lagdiv - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagpow - >>> lagpow([1, 2, 3], 2) - array([ 14., -16., 56., -72., 54.]) - - """ - return pu._pow(lagmul, c, pow, maxpower) - - -def lagder(c, m=1, scl=1, axis=0): - """ - Differentiate a Laguerre series. - - Returns the Laguerre series coefficients `c` differentiated `m` times - along `axis`.
At each iteration the result is multiplied by `scl` (the - scaling factor is for use in a linear change of variable). The argument - `c` is an array of coefficients from low to high degree along each - axis, e.g., [1,2,3] represents the series ``1*L_0 + 2*L_1 + 3*L_2`` - while [[1,2],[1,2]] represents ``1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + - 2*L_0(x)*L_1(y) + 2*L_1(x)*L_1(y)`` if axis=0 is ``x`` and axis=1 is - ``y``. - - Parameters - ---------- - c : array_like - Array of Laguerre series coefficients. If `c` is multidimensional - the different axes correspond to different variables with the - degree in each axis given by the corresponding index. - m : int, optional - Number of derivatives taken, must be non-negative. (Default: 1) - scl : scalar, optional - Each differentiation is multiplied by `scl`. The end result is - multiplication by ``scl**m``. This is for use in a linear change of - variable. (Default: 1) - axis : int, optional - Axis over which the derivative is taken. (Default: 0). - - .. versionadded:: 1.7.0 - - Returns - ------- - der : ndarray - Laguerre series of the derivative. - - See Also - -------- - lagint - - Notes - ----- - In general, the result of differentiating a Laguerre series does not - resemble the same operation on a power series. Thus the result of this - function may be "unintuitive," albeit correct; see Examples section - below. - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagder - >>> lagder([ 1., 1., 1., -3.]) - array([1., 2., 3.]) - >>> lagder([ 1., 0., 0., -4., 3.], m=2) - array([1., 2., 3.]) - - """ - c = np.array(c, ndmin=1, copy=True) - if c.dtype.char in '?bBhHiIlLqQpP': - c = c.astype(np.double) - - cnt = pu._deprecate_as_int(m, "the order of derivation") - iaxis = pu._deprecate_as_int(axis, "the axis") - if cnt < 0: - raise ValueError("The order of derivation must be non-negative") - iaxis = normalize_axis_index(iaxis, c.ndim) - - if cnt == 0: - return c - - c = np.moveaxis(c, iaxis, 0) - n = len(c) - if cnt >= n: - c = c[:1]*0 - else: - for i in range(cnt): - n = n - 1 - c *= scl - der = np.empty((n,) + c.shape[1:], dtype=c.dtype) - for j in range(n, 1, -1): - der[j - 1] = -c[j] - c[j - 1] += c[j] - der[0] = -c[1] - c = der - c = np.moveaxis(c, 0, iaxis) - return c - - -def lagint(c, m=1, k=[], lbnd=0, scl=1, axis=0): - """ - Integrate a Laguerre series. - - Returns the Laguerre series coefficients `c` integrated `m` times from - `lbnd` along `axis`. At each iteration the resulting series is - **multiplied** by `scl` and an integration constant, `k`, is added. - The scaling factor is for use in a linear change of variable. ("Buyer - beware": note that, depending on what one is doing, one may want `scl` - to be the reciprocal of what one might expect; for more information, - see the Notes section below.) The argument `c` is an array of - coefficients from low to high degree along each axis, e.g., [1,2,3] - represents the series ``L_0 + 2*L_1 + 3*L_2`` while [[1,2],[1,2]] - represents ``1*L_0(x)*L_0(y) + 1*L_1(x)*L_0(y) + 2*L_0(x)*L_1(y) + - 2*L_1(x)*L_1(y)`` if axis=0 is ``x`` and axis=1 is ``y``. - - - Parameters - ---------- - c : array_like - Array of Laguerre series coefficients. If `c` is multidimensional - the different axes correspond to different variables with the - degree in each axis given by the corresponding index. - m : int, optional - Order of integration, must be positive. (Default: 1) - k : {[], list, scalar}, optional - Integration constant(s).
The value of the first integral at - ``lbnd`` is the first value in the list, the value of the second - integral at ``lbnd`` is the second value, etc. If ``k == []`` (the - default), all constants are set to zero. If ``m == 1``, a single - scalar can be given instead of a list. - lbnd : scalar, optional - The lower bound of the integral. (Default: 0) - scl : scalar, optional - Following each integration the result is *multiplied* by `scl` - before the integration constant is added. (Default: 1) - axis : int, optional - Axis over which the integral is taken. (Default: 0). - - .. versionadded:: 1.7.0 - - Returns - ------- - S : ndarray - Laguerre series coefficients of the integral. - - Raises - ------ - ValueError - If ``m < 0``, ``len(k) > m``, ``np.ndim(lbnd) != 0``, or - ``np.ndim(scl) != 0``. - - See Also - -------- - lagder - - Notes - ----- - Note that the result of each integration is *multiplied* by `scl`. - Why is this important to note? Say one is making a linear change of - variable :math:`u = ax + b` in an integral relative to `x`. Then - :math:`dx = du/a`, so one will need to set `scl` equal to - :math:`1/a` - perhaps not what one would have first thought. - - Also note that, in general, the result of integrating a C-series needs - to be "reprojected" onto the C-series basis set. Thus, typically, - the result of this function is "unintuitive," albeit correct; see - Examples section below. - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagint - >>> lagint([1,2,3]) - array([ 1., 1., 1., -3.]) - >>> lagint([1,2,3], m=2) - array([ 1., 0., 0., -4., 3.]) - >>> lagint([1,2,3], k=1) - array([ 2., 1., 1., -3.]) - >>> lagint([1,2,3], lbnd=-1) - array([11.5, 1. , 1. , -3. ]) - >>> lagint([1,2], m=2, k=[1,2], lbnd=-1) - array([ 11.16666667, -5. , -3. , 2. ]) # may vary - - """ - c = np.array(c, ndmin=1, copy=True) - if c.dtype.char in '?bBhHiIlLqQpP': - c = c.astype(np.double) - if not np.iterable(k): - k = [k] - cnt = pu._deprecate_as_int(m, "the order of integration") - iaxis = pu._deprecate_as_int(axis, "the axis") - if cnt < 0: - raise ValueError("The order of integration must be non-negative") - if len(k) > cnt: - raise ValueError("Too many integration constants") - if np.ndim(lbnd) != 0: - raise ValueError("lbnd must be a scalar.") - if np.ndim(scl) != 0: - raise ValueError("scl must be a scalar.") - iaxis = normalize_axis_index(iaxis, c.ndim) - - if cnt == 0: - return c - - c = np.moveaxis(c, iaxis, 0) - k = list(k) + [0]*(cnt - len(k)) - for i in range(cnt): - n = len(c) - c *= scl - if n == 1 and np.all(c[0] == 0): - c[0] += k[i] - else: - tmp = np.empty((n + 1,) + c.shape[1:], dtype=c.dtype) - tmp[0] = c[0] - tmp[1] = -c[0] - for j in range(1, n): - tmp[j] += c[j] - tmp[j + 1] = -c[j] - tmp[0] += k[i] - lagval(lbnd, tmp) - c = tmp - c = np.moveaxis(c, 0, iaxis) - return c - - -def lagval(x, c, tensor=True): - """ - Evaluate a Laguerre series at points x. - - If `c` is of length `n + 1`, this function returns the value: - - .. math:: p(x) = c_0 * L_0(x) + c_1 * L_1(x) + ... + c_n * L_n(x) - - The parameter `x` is converted to an array only if it is a tuple or a - list, otherwise it is treated as a scalar. In either case, either `x` - or its elements must support multiplication and addition both with - themselves and with the elements of `c`. - - If `c` is a 1-D array, then `p(x)` will have the same shape as `x`. If - `c` is multidimensional, then the shape of the result depends on the - value of `tensor`. 
If `tensor` is true the shape will be c.shape[1:] + - x.shape. If `tensor` is false the shape will be c.shape[1:]. Note that - scalars have shape (). - - Trailing zeros in the coefficients will be used in the evaluation, so - they should be avoided if efficiency is a concern. - - Parameters - ---------- - x : array_like, compatible object - If `x` is a list or tuple, it is converted to an ndarray, otherwise - it is left unchanged and treated as a scalar. In either case, `x` - or its elements must support addition and multiplication with - themselves and with the elements of `c`. - c : array_like - Array of coefficients ordered so that the coefficients for terms of - degree n are contained in c[n]. If `c` is multidimensional the - remaining indices enumerate multiple polynomials. In the two - dimensional case the coefficients may be thought of as stored in - the columns of `c`. - tensor : boolean, optional - If True, the shape of the coefficient array is extended with ones - on the right, one for each dimension of `x`. Scalars have dimension 0 - for this action. The result is that every column of coefficients in - `c` is evaluated for every element of `x`. If False, `x` is broadcast - over the columns of `c` for the evaluation. This keyword is useful - when `c` is multidimensional. The default value is True. - - .. versionadded:: 1.7.0 - - Returns - ------- - values : ndarray, algebra_like - The shape of the return value is described above. - - See Also - -------- - lagval2d, laggrid2d, lagval3d, laggrid3d - - Notes - ----- - The evaluation uses Clenshaw recursion, aka synthetic division. - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagval - >>> coef = [1,2,3] - >>> lagval(1, coef) - -0.5 - >>> lagval([[1,2],[3,4]], coef) - array([[-0.5, -4. ], - [-4.5, -2. ]]) - - """ - c = np.array(c, ndmin=1, copy=False) - if c.dtype.char in '?bBhHiIlLqQpP': - c = c.astype(np.double) - if isinstance(x, (tuple, list)): - x = np.asarray(x) - if isinstance(x, np.ndarray) and tensor: - c = c.reshape(c.shape + (1,)*x.ndim) - - if len(c) == 1: - c0 = c[0] - c1 = 0 - elif len(c) == 2: - c0 = c[0] - c1 = c[1] - else: - nd = len(c) - c0 = c[-2] - c1 = c[-1] - for i in range(3, len(c) + 1): - tmp = c0 - nd = nd - 1 - c0 = c[-i] - (c1*(nd - 1))/nd - c1 = tmp + (c1*((2*nd - 1) - x))/nd - return c0 + c1*(1 - x) - - -def lagval2d(x, y, c): - """ - Evaluate a 2-D Laguerre series at points (x, y). - - This function returns the values: - - .. math:: p(x,y) = \\sum_{i,j} c_{i,j} * L_i(x) * L_j(y) - - The parameters `x` and `y` are converted to arrays only if they are - tuples or lists, otherwise they are treated as scalars and they - must have the same shape after conversion. In either case, either `x` - and `y` or their elements must support multiplication and addition both - with themselves and with the elements of `c`. - - If `c` is a 1-D array a one is implicitly appended to its shape to make - it 2-D. The shape of the result will be c.shape[2:] + x.shape. - - Parameters - ---------- - x, y : array_like, compatible objects - The two dimensional series is evaluated at the points `(x, y)`, - where `x` and `y` must have the same shape. If `x` or `y` is a list - or tuple, it is first converted to an ndarray, otherwise it is left - unchanged and if it isn't an ndarray it is treated as a scalar. - c : array_like - Array of coefficients ordered so that the coefficient of the term - of multi-degree i,j is contained in ``c[i,j]``.
If `c` has - dimension greater than two the remaining indices enumerate multiple - sets of coefficients. - - Returns - ------- - values : ndarray, compatible object - The values of the two dimensional polynomial at points formed with - pairs of corresponding values from `x` and `y`. - - See Also - -------- - lagval, laggrid2d, lagval3d, laggrid3d - - Notes - ----- - - .. versionadded:: 1.7.0 - - """ - return pu._valnd(lagval, c, x, y) - - -def laggrid2d(x, y, c): - """ - Evaluate a 2-D Laguerre series on the Cartesian product of x and y. - - This function returns the values: - - .. math:: p(a,b) = \\sum_{i,j} c_{i,j} * L_i(a) * L_j(b) - - where the points `(a, b)` consist of all pairs formed by taking - `a` from `x` and `b` from `y`. The resulting points form a grid with - `x` in the first dimension and `y` in the second. - - The parameters `x` and `y` are converted to arrays only if they are - tuples or lists, otherwise they are treated as scalars. In either - case, either `x` and `y` or their elements must support multiplication - and addition both with themselves and with the elements of `c`. - - If `c` has fewer than two dimensions, ones are implicitly appended to - its shape to make it 2-D. The shape of the result will be c.shape[2:] + - x.shape + y.shape. - - Parameters - ---------- - x, y : array_like, compatible objects - The two dimensional series is evaluated at the points in the - Cartesian product of `x` and `y`. If `x` or `y` is a list or - tuple, it is first converted to an ndarray, otherwise it is left - unchanged and, if it isn't an ndarray, it is treated as a scalar. - c : array_like - Array of coefficients ordered so that the coefficient of the term of - multi-degree i,j is contained in `c[i,j]`. If `c` has dimension - greater than two the remaining indices enumerate multiple sets of - coefficients. - - Returns - ------- - values : ndarray, compatible object - The values of the two dimensional Laguerre series at points in the - Cartesian product of `x` and `y`. - - See Also - -------- - lagval, lagval2d, lagval3d, laggrid3d - - Notes - ----- - - .. versionadded:: 1.7.0 - - """ - return pu._gridnd(lagval, c, x, y) - -
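To make the pointwise-versus-grid distinction above concrete, a minimal sketch (assuming only numpy):

    import numpy as np
    from numpy.polynomial import laguerre as L

    c = np.array([[1., 2.], [3., 4.]])   # c[i, j] weights L_i(x)*L_j(y)
    x = np.array([0.5, 1.0])
    y = np.array([1.0, 2.0])
    print(L.lagval2d(x, y, c).shape)     # (2,)   one value per (x, y) pair
    print(L.laggrid2d(x, y, c).shape)    # (2, 2) the full Cartesian product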
-def lagval3d(x, y, z, c): - """ - Evaluate a 3-D Laguerre series at points (x, y, z). - - This function returns the values: - - .. math:: p(x,y,z) = \\sum_{i,j,k} c_{i,j,k} * L_i(x) * L_j(y) * L_k(z) - - The parameters `x`, `y`, and `z` are converted to arrays only if - they are tuples or lists, otherwise they are treated as scalars and - they must have the same shape after conversion. In either case, either - `x`, `y`, and `z` or their elements must support multiplication and - addition both with themselves and with the elements of `c`. - - If `c` has fewer than 3 dimensions, ones are implicitly appended to its - shape to make it 3-D. The shape of the result will be c.shape[3:] + - x.shape. - - Parameters - ---------- - x, y, z : array_like, compatible object - The three dimensional series is evaluated at the points - `(x, y, z)`, where `x`, `y`, and `z` must have the same shape. If - any of `x`, `y`, or `z` is a list or tuple, it is first converted - to an ndarray, otherwise it is left unchanged and if it isn't an - ndarray it is treated as a scalar. - c : array_like - Array of coefficients ordered so that the coefficient of the term of - multi-degree i,j,k is contained in ``c[i,j,k]``. If `c` has dimension - greater than 3 the remaining indices enumerate multiple sets of - coefficients. - - Returns - ------- - values : ndarray, compatible object - The values of the multidimensional polynomial on points formed with - triples of corresponding values from `x`, `y`, and `z`. - - See Also - -------- - lagval, lagval2d, laggrid2d, laggrid3d - - Notes - ----- - - .. versionadded:: 1.7.0 - - """ - return pu._valnd(lagval, c, x, y, z) - - -def laggrid3d(x, y, z, c): - """ - Evaluate a 3-D Laguerre series on the Cartesian product of x, y, and z. - - This function returns the values: - - .. math:: p(a,b,c) = \\sum_{i,j,k} c_{i,j,k} * L_i(a) * L_j(b) * L_k(c) - - where the points `(a, b, c)` consist of all triples formed by taking - `a` from `x`, `b` from `y`, and `c` from `z`. The resulting points form - a grid with `x` in the first dimension, `y` in the second, and `z` in - the third. - - The parameters `x`, `y`, and `z` are converted to arrays only if they - are tuples or lists, otherwise they are treated as scalars. In - either case, either `x`, `y`, and `z` or their elements must support - multiplication and addition both with themselves and with the elements - of `c`. - - If `c` has fewer than three dimensions, ones are implicitly appended to - its shape to make it 3-D. The shape of the result will be c.shape[3:] + - x.shape + y.shape + z.shape. - - Parameters - ---------- - x, y, z : array_like, compatible objects - The three dimensional series is evaluated at the points in the - Cartesian product of `x`, `y`, and `z`. If `x`,`y`, or `z` is a - list or tuple, it is first converted to an ndarray, otherwise it is - left unchanged and, if it isn't an ndarray, it is treated as a - scalar. - c : array_like - Array of coefficients ordered so that the coefficients for terms of - degree i,j,k are contained in ``c[i,j,k]``. If `c` has dimension - greater than three the remaining indices enumerate multiple sets of - coefficients. - - Returns - ------- - values : ndarray, compatible object - The values of the three dimensional polynomial at points in the Cartesian - product of `x`, `y`, and `z`. - - See Also - -------- - lagval, lagval2d, laggrid2d, lagval3d - - Notes - ----- - - .. versionadded:: 1.7.0 - - """ - return pu._gridnd(lagval, c, x, y, z) - - -def lagvander(x, deg): - """Pseudo-Vandermonde matrix of given degree. - - Returns the pseudo-Vandermonde matrix of degree `deg` and sample points - `x`. The pseudo-Vandermonde matrix is defined by - - .. math:: V[..., i] = L_i(x) - - where `0 <= i <= deg`. The leading indices of `V` index the elements of - `x` and the last index is the degree of the Laguerre polynomial. - - If `c` is a 1-D array of coefficients of length `n + 1` and `V` is the - array ``V = lagvander(x, n)``, then ``np.dot(V, c)`` and - ``lagval(x, c)`` are the same up to roundoff. This equivalence is - useful both for least squares fitting and for the evaluation of a large - number of Laguerre series of the same degree and sample points. - - Parameters - ---------- - x : array_like - Array of points. The dtype is converted to float64 or complex128 - depending on whether any of the elements are complex. If `x` is - scalar it is converted to a 1-D array. - deg : int - Degree of the resulting matrix. - - Returns - ------- - vander : ndarray - The pseudo-Vandermonde matrix. The shape of the returned matrix is - ``x.shape + (deg + 1,)``, where the last index is the degree of the - corresponding Laguerre polynomial. The dtype will be the same as - the converted `x`. - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagvander - >>> x = np.array([0, 1, 2]) - >>> lagvander(x, 3) - array([[ 1. , 1. , 1. , 1. ], - [ 1. , 0. , -0.5 , -0.66666667], - [ 1. , -1. , -1. , -0.33333333]]) - - """ - ideg = pu._deprecate_as_int(deg, "deg") - if ideg < 0: - raise ValueError("deg must be non-negative") - - x = np.array(x, copy=False, ndmin=1) + 0.0 - dims = (ideg + 1,) + x.shape - dtyp = x.dtype - v = np.empty(dims, dtype=dtyp) - v[0] = x*0 + 1 - if ideg > 0: - v[1] = 1 - x - for i in range(2, ideg + 1): - v[i] = (v[i-1]*(2*i - 1 - x) - v[i-2]*(i - 1))/i - return np.moveaxis(v, 0, -1) - -
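The ``np.dot(V, c)`` equivalence described above can be checked directly; a minimal sketch (assuming only numpy):

    import numpy as np
    from numpy.polynomial import laguerre as L

    x = np.linspace(0., 2., 5)
    c = np.array([1., 2., 3.])
    V = L.lagvander(x, 2)                        # shape (5, 3)
    print(np.allclose(V @ c, L.lagval(x, c)))    # True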
-def lagvander2d(x, y, deg): - """Pseudo-Vandermonde matrix of given degrees. - - Returns the pseudo-Vandermonde matrix of degrees `deg` and sample - points `(x, y)`. The pseudo-Vandermonde matrix is defined by - - .. math:: V[..., (deg[1] + 1)*i + j] = L_i(x) * L_j(y), - - where `0 <= i <= deg[0]` and `0 <= j <= deg[1]`. The leading indices of - `V` index the points `(x, y)` and the last index encodes the degrees of - the Laguerre polynomials. - - If ``V = lagvander2d(x, y, [xdeg, ydeg])``, then the columns of `V` - correspond to the elements of a 2-D coefficient array `c` of shape - (xdeg + 1, ydeg + 1) in the order - - .. math:: c_{00}, c_{01}, c_{02} ... , c_{10}, c_{11}, c_{12} ... - - and ``np.dot(V, c.flat)`` and ``lagval2d(x, y, c)`` will be the same - up to roundoff. This equivalence is useful both for least squares - fitting and for the evaluation of a large number of 2-D Laguerre - series of the same degrees and sample points. - - Parameters - ---------- - x, y : array_like - Arrays of point coordinates, all of the same shape. The dtypes - will be converted to either float64 or complex128 depending on - whether any of the elements are complex. Scalars are converted to - 1-D arrays. - deg : list of ints - List of maximum degrees of the form [x_deg, y_deg]. - - Returns - ------- - vander2d : ndarray - The shape of the returned matrix is ``x.shape + (order,)``, where - :math:`order = (deg[0]+1)*(deg[1]+1)`. The dtype will be the same - as the converted `x` and `y`. - - See Also - -------- - lagvander, lagvander3d, lagval2d, lagval3d - - Notes - ----- - - .. versionadded:: 1.7.0 - - """ - return pu._vander_nd_flat((lagvander, lagvander), (x, y), deg) - - -def lagvander3d(x, y, z, deg): - """Pseudo-Vandermonde matrix of given degrees. - - Returns the pseudo-Vandermonde matrix of degrees `deg` and sample - points `(x, y, z)`. If `l, m, n` are the given degrees in `x, y, z`, - then the pseudo-Vandermonde matrix is defined by - - .. math:: V[..., (m+1)(n+1)i + (n+1)j + k] = L_i(x)*L_j(y)*L_k(z), - - where `0 <= i <= l`, `0 <= j <= m`, and `0 <= k <= n`. The leading - indices of `V` index the points `(x, y, z)` and the last index encodes - the degrees of the Laguerre polynomials. - - If ``V = lagvander3d(x, y, z, [xdeg, ydeg, zdeg])``, then the columns - of `V` correspond to the elements of a 3-D coefficient array `c` of - shape (xdeg + 1, ydeg + 1, zdeg + 1) in the order - - .. math:: c_{000}, c_{001}, c_{002},... , c_{010}, c_{011}, c_{012},... - - and ``np.dot(V, c.flat)`` and ``lagval3d(x, y, z, c)`` will be the - same up to roundoff. This equivalence is useful both for least squares - fitting and for the evaluation of a large number of 3-D Laguerre - series of the same degrees and sample points. - - Parameters - ---------- - x, y, z : array_like - Arrays of point coordinates, all of the same shape.
The dtypes will - be converted to either float64 or complex128 depending on whether - any of the elements are complex. Scalars are converted to 1-D - arrays. - deg : list of ints - List of maximum degrees of the form [x_deg, y_deg, z_deg]. - - Returns - ------- - vander3d : ndarray - The shape of the returned matrix is ``x.shape + (order,)``, where - :math:`order = (deg[0]+1)*(deg[1]+1)*(deg[2]+1)`. The dtype will - be the same as the converted `x`, `y`, and `z`. - - See Also - -------- - lagvander, lagvander3d, lagval2d, lagval3d - - Notes - ----- - - .. versionadded:: 1.7.0 - - """ - return pu._vander_nd_flat((lagvander, lagvander, lagvander), (x, y, z), deg) - - -def lagfit(x, y, deg, rcond=None, full=False, w=None): - """ - Least squares fit of Laguerre series to data. - - Return the coefficients of a Laguerre series of degree `deg` that is the - least squares fit to the data values `y` given at points `x`. If `y` is - 1-D the returned coefficients will also be 1-D. If `y` is 2-D multiple - fits are done, one for each column of `y`, and the resulting - coefficients are stored in the corresponding columns of a 2-D return. - The fitted polynomial(s) are in the form - - .. math:: p(x) = c_0 + c_1 * L_1(x) + ... + c_n * L_n(x), - - where ``n`` is `deg`. - - Parameters - ---------- - x : array_like, shape (M,) - x-coordinates of the M sample points ``(x[i], y[i])``. - y : array_like, shape (M,) or (M, K) - y-coordinates of the sample points. Several data sets of sample - points sharing the same x-coordinates can be fitted at once by - passing in a 2D-array that contains one dataset per column. - deg : int or 1-D array_like - Degree(s) of the fitting polynomials. If `deg` is a single integer - all terms up to and including the `deg`'th term are included in the - fit. For NumPy versions >= 1.11.0 a list of integers specifying the - degrees of the terms to include may be used instead. - rcond : float, optional - Relative condition number of the fit. Singular values smaller than - this relative to the largest singular value will be ignored. The - default value is len(x)*eps, where eps is the relative precision of - the float type, about 2e-16 in most cases. - full : bool, optional - Switch determining nature of return value. When it is False (the - default) just the coefficients are returned, when True diagnostic - information from the singular value decomposition is also returned. - w : array_like, shape (`M`,), optional - Weights. If not None, the weight ``w[i]`` applies to the unsquared - residual ``y[i] - y_hat[i]`` at ``x[i]``. Ideally the weights are - chosen so that the errors of the products ``w[i]*y[i]`` all have the - same variance. When using inverse-variance weighting, use - ``w[i] = 1/sigma(y[i])``. The default value is None. - - Returns - ------- - coef : ndarray, shape (M,) or (M, K) - Laguerre coefficients ordered from low to high. If `y` was 2-D, - the coefficients for the data in column *k* of `y` are in column - *k*. - - [residuals, rank, singular_values, rcond] : list - These values are only returned if ``full == True`` - - - residuals -- sum of squared residuals of the least squares fit - - rank -- the numerical rank of the scaled Vandermonde matrix - - singular_values -- singular values of the scaled Vandermonde matrix - - rcond -- value of `rcond`. - - For more details, see `numpy.linalg.lstsq`. - - Warns - ----- - RankWarning - The rank of the coefficient matrix in the least-squares fit is - deficient. The warning is only raised if ``full == False``. 
The - warnings can be turned off by - - >>> import warnings - >>> warnings.simplefilter('ignore', np.RankWarning) - - See Also - -------- - numpy.polynomial.polynomial.polyfit - numpy.polynomial.legendre.legfit - numpy.polynomial.chebyshev.chebfit - numpy.polynomial.hermite.hermfit - numpy.polynomial.hermite_e.hermefit - lagval : Evaluates a Laguerre series. - lagvander : pseudo Vandermonde matrix of Laguerre series. - lagweight : Laguerre weight function. - numpy.linalg.lstsq : Computes a least-squares fit from the matrix. - scipy.interpolate.UnivariateSpline : Computes spline fits. - - Notes - ----- - The solution is the coefficients of the Laguerre series ``p`` that - minimizes the sum of the weighted squared errors - - .. math:: E = \\sum_j w_j^2 * |y_j - p(x_j)|^2, - - where the :math:`w_j` are the weights. This problem is solved by - setting up as the (typically) overdetermined matrix equation - - .. math:: V(x) * c = w * y, - - where ``V`` is the weighted pseudo Vandermonde matrix of `x`, ``c`` are the - coefficients to be solved for, `w` are the weights, and `y` are the - observed values. This equation is then solved using the singular value - decomposition of ``V``. - - If some of the singular values of `V` are so small that they are - neglected, then a `RankWarning` will be issued. This means that the - coefficient values may be poorly determined. Using a lower order fit - will usually get rid of the warning. The `rcond` parameter can also be - set to a value smaller than its default, but the resulting fit may be - spurious and have large contributions from roundoff error. - - Fits using Laguerre series are probably most useful when the data can - be approximated by ``sqrt(w(x)) * p(x)``, where ``w(x)`` is the Laguerre - weight. In that case the weight ``sqrt(w(x[i]))`` should be used - together with data values ``y[i]/sqrt(w(x[i]))``. The weight function is - available as `lagweight`. - - References - ---------- - .. [1] Wikipedia, "Curve fitting", - https://en.wikipedia.org/wiki/Curve_fitting - - Examples - -------- - >>> from numpy.polynomial.laguerre import lagfit, lagval - >>> x = np.linspace(0, 10) - >>> err = np.random.randn(len(x))/10 - >>> y = lagval(x, [1, 2, 3]) + err - >>> lagfit(x, y, 2) - array([ 0.96971004, 2.00193749, 3.00288744]) # may vary - - """ - return pu._fit(lagvander, x, y, deg, rcond, full, w) - - -def lagcompanion(c): - """ - Return the companion matrix of c. - - The usual companion matrix of the Laguerre polynomials is already - symmetric when `c` is a basis Laguerre polynomial, so no scaling is - applied. - - Parameters - ---------- - c : array_like - 1-D array of Laguerre series coefficients ordered from low to high - degree. - - Returns - ------- - mat : ndarray - Companion matrix of dimensions (deg, deg). - - Notes - ----- - - .. versionadded:: 1.7.0 - - """ - # c is a trimmed copy - [c] = pu.as_series([c]) - if len(c) < 2: - raise ValueError('Series must have maximum degree of at least 1.') - if len(c) == 2: - return np.array([[1 + c[0]/c[1]]]) - - n = len(c) - 1 - mat = np.zeros((n, n), dtype=c.dtype) - top = mat.reshape(-1)[1::n+1] - mid = mat.reshape(-1)[0::n+1] - bot = mat.reshape(-1)[n::n+1] - top[...] = -np.arange(1, n) - mid[...] = 2.*np.arange(n) + 1. - bot[...] = top - mat[:, -1] += (c[:-1]/c[-1])*n - return mat - - -def lagroots(c): - """ - Compute the roots of a Laguerre series. - - Return the roots (a.k.a. "zeros") of the polynomial - - .. math:: p(x) = \\sum_i c[i] * L_i(x). 
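- 
- The roots are recovered as eigenvalues of the companion matrix from
- ``lagcompanion`` above; a small sketch of that relationship, using
- ``numpy.linalg`` in place of the module's internal ``la`` alias:
- 
- >>> import numpy as np
- >>> from numpy.polynomial.laguerre import lagcompanion, lagroots
- >>> c = np.array([2., -8., 12., -6.])
- >>> eig = np.sort(np.linalg.eigvals(lagcompanion(c)))
- >>> bool(np.allclose(eig.real, lagroots(c).real))
- True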
- 
- Parameters
- ----------
- c : 1-D array_like
- 1-D array of coefficients.
- 
- Returns
- -------
- out : ndarray
- Array of the roots of the series. If all the roots are real,
- then `out` is also real, otherwise it is complex.
- 
- See Also
- --------
- numpy.polynomial.polynomial.polyroots
- numpy.polynomial.legendre.legroots
- numpy.polynomial.chebyshev.chebroots
- numpy.polynomial.hermite.hermroots
- numpy.polynomial.hermite_e.hermeroots
- 
- Notes
- -----
- The root estimates are obtained as the eigenvalues of the companion
- matrix. Roots far from the origin of the complex plane may have large
- errors due to the numerical instability of the series for such
- values. Roots with multiplicity greater than 1 will also show larger
- errors as the value of the series near such points is relatively
- insensitive to errors in the roots. Isolated roots near the origin can
- be improved by a few iterations of Newton's method.
- 
- The Laguerre series basis polynomials aren't powers of `x` so the
- results of this function may seem unintuitive.
- 
- Examples
- --------
- >>> from numpy.polynomial.laguerre import lagroots, lagfromroots
- >>> coef = lagfromroots([0, 1, 2])
- >>> coef
- array([ 2., -8., 12., -6.])
- >>> lagroots(coef)
- array([-4.4408921e-16, 1.0000000e+00, 2.0000000e+00])
- 
- """
- # c is a trimmed copy
- [c] = pu.as_series([c])
- if len(c) <= 1:
- return np.array([], dtype=c.dtype)
- if len(c) == 2:
- return np.array([1 + c[0]/c[1]])
- 
- # rotated companion matrix reduces error
- m = lagcompanion(c)[::-1,::-1]
- r = la.eigvals(m)
- r.sort()
- return r
- 
- 
- def laggauss(deg):
- """
- Gauss-Laguerre quadrature.
- 
- Computes the sample points and weights for Gauss-Laguerre quadrature.
- These sample points and weights will correctly integrate polynomials of
- degree :math:`2*deg - 1` or less over the interval :math:`[0, \\infty]`
- with the weight function :math:`f(x) = \\exp(-x)`.
- 
- Parameters
- ----------
- deg : int
- Number of sample points and weights. It must be >= 1.
- 
- Returns
- -------
- x : ndarray
- 1-D ndarray containing the sample points.
- w : ndarray
- 1-D ndarray containing the weights.
- 
- Notes
- -----
- 
- .. versionadded:: 1.7.0
- 
- The results have only been tested up to degree 100; higher degrees may
- be problematic. The weights are determined by using the fact that
- 
- .. math:: w_k = c / (L'_n(x_k) * L_{n-1}(x_k))
- 
- where :math:`c` is a constant independent of :math:`k` and :math:`x_k`
- is the k'th root of :math:`L_n`, and then scaling the results to get
- the right value when integrating 1.
- 
- """
- ideg = pu._deprecate_as_int(deg, "deg")
- if ideg <= 0:
- raise ValueError("deg must be a positive integer")
- 
- # first approximation of roots. We use the fact that the companion
- # matrix is symmetric in this case in order to obtain better zeros.
- c = np.array([0]*deg + [1])
- m = lagcompanion(c)
- x = la.eigvalsh(m)
- 
- # improve roots by one application of Newton
- dy = lagval(x, c)
- df = lagval(x, lagder(c))
- x -= dy/df
- 
- # compute the weights. We scale the factor to avoid possible numerical
- # overflow.
- fm = lagval(x, c[1:])
- fm /= np.abs(fm).max()
- df /= np.abs(df).max()
- w = 1/(fm * df)
- 
- # scale w to get the right value, 1 in this case
- w /= w.sum()
- 
- return x, w
- 
- 
- def lagweight(x):
- """Weight function of the Laguerre polynomials.
- 
- The weight function is :math:`\\exp(-x)` and the interval of integration
- is :math:`[0, \\infty]`. The Laguerre polynomials are orthogonal, but not
- normalized, with respect to this weight function.
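- 
- As a small sketch of what this weight means in practice, the
- Gauss-Laguerre rule from ``laggauss`` above integrates exactly
- against it, e.g. :math:`\\int_0^\\infty x \\exp(-x) dx = 1` (the
- degree 10 below is an arbitrary choice):
- 
- >>> import numpy as np
- >>> from numpy.polynomial.laguerre import laggauss
- >>> x, w = laggauss(10)
- >>> bool(np.isclose(np.sum(w * x), 1.0))
- True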
- - Parameters - ---------- - x : array_like - Values at which the weight function will be computed. - - Returns - ------- - w : ndarray - The weight function at `x`. - - Notes - ----- - - .. versionadded:: 1.7.0 - - """ - w = np.exp(-x) - return w - -# -# Laguerre series class -# - -class Laguerre(ABCPolyBase): - """A Laguerre series class. - - The Laguerre class provides the standard Python numerical methods - '+', '-', '*', '//', '%', 'divmod', '**', and '()' as well as the - attributes and methods listed in the `ABCPolyBase` documentation. - - Parameters - ---------- - coef : array_like - Laguerre coefficients in order of increasing degree, i.e, - ``(1, 2, 3)`` gives ``1*L_0(x) + 2*L_1(X) + 3*L_2(x)``. - domain : (2,) array_like, optional - Domain to use. The interval ``[domain[0], domain[1]]`` is mapped - to the interval ``[window[0], window[1]]`` by shifting and scaling. - The default value is [0, 1]. - window : (2,) array_like, optional - Window, see `domain` for its use. The default value is [0, 1]. - - .. versionadded:: 1.6.0 - symbol : str, optional - Symbol used to represent the independent variable in string - representations of the polynomial expression, e.g. for printing. - The symbol must be a valid Python identifier. Default value is 'x'. - - .. versionadded:: 1.24 - - """ - # Virtual Functions - _add = staticmethod(lagadd) - _sub = staticmethod(lagsub) - _mul = staticmethod(lagmul) - _div = staticmethod(lagdiv) - _pow = staticmethod(lagpow) - _val = staticmethod(lagval) - _int = staticmethod(lagint) - _der = staticmethod(lagder) - _fit = staticmethod(lagfit) - _line = staticmethod(lagline) - _roots = staticmethod(lagroots) - _fromroots = staticmethod(lagfromroots) - - # Virtual properties - domain = np.array(lagdomain) - window = np.array(lagdomain) - basis_name = 'L' diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/abstract/updateable_api_resource.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/abstract/updateable_api_resource.py deleted file mode 100644 index 245f9b80b347e8912c8a96652652f5ba9672451a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/openai/api_resources/abstract/updateable_api_resource.py +++ /dev/null @@ -1,16 +0,0 @@ -from urllib.parse import quote_plus -from typing import Awaitable - -from openai.api_resources.abstract.api_resource import APIResource - - -class UpdateableAPIResource(APIResource): - @classmethod - def modify(cls, sid, **params): - url = "%s/%s" % (cls.class_url(), quote_plus(sid)) - return cls._static_request("post", url, **params) - - @classmethod - def amodify(cls, sid, **params) -> Awaitable: - url = "%s/%s" % (cls.class_url(), quote_plus(sid)) - return cls._astatic_request("patch", url, **params) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/api/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/api/__init__.py deleted file mode 100644 index a0d42b6541fdf8817b996ef9804db8a87a2bcd2c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/api/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -""" public toolkit API """ -from pandas.api import ( - extensions, - indexers, - interchange, - types, - typing, -) - -__all__ = [ - "interchange", - "extensions", - "indexers", - "types", - "typing", -] diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/ewm.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/ewm.py deleted file mode 100644 index 775f3cd4286773e50f2ec6ce93191ff229186599..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/core/window/ewm.py +++ /dev/null @@ -1,1085 +0,0 @@ -from __future__ import annotations - -import datetime -from functools import partial -from textwrap import dedent -from typing import TYPE_CHECKING - -import numpy as np - -from pandas._libs.tslibs import Timedelta -import pandas._libs.window.aggregations as window_aggregations -from pandas.util._decorators import doc - -from pandas.core.dtypes.common import ( - is_datetime64_ns_dtype, - is_numeric_dtype, -) -from pandas.core.dtypes.missing import isna - -from pandas.core import common -from pandas.core.indexers.objects import ( - BaseIndexer, - ExponentialMovingWindowIndexer, - GroupbyIndexer, -) -from pandas.core.util.numba_ import ( - get_jit_arguments, - maybe_use_numba, -) -from pandas.core.window.common import zsqrt -from pandas.core.window.doc import ( - _shared_docs, - create_section_header, - kwargs_numeric_only, - numba_notes, - template_header, - template_returns, - template_see_also, - window_agg_numba_parameters, -) -from pandas.core.window.numba_ import ( - generate_numba_ewm_func, - generate_numba_ewm_table_func, -) -from pandas.core.window.online import ( - EWMMeanState, - generate_online_numba_ewma_func, -) -from pandas.core.window.rolling import ( - BaseWindow, - BaseWindowGroupby, -) - -if TYPE_CHECKING: - from pandas._typing import ( - Axis, - TimedeltaConvertibleTypes, - ) - - from pandas import ( - DataFrame, - Series, - ) - from pandas.core.generic import NDFrame - - -def get_center_of_mass( - comass: float | None, - span: float | None, - halflife: float | None, - alpha: float | None, -) -> float: - valid_count = common.count_not_none(comass, span, halflife, alpha) - if valid_count > 1: - raise ValueError("comass, span, halflife, and alpha are mutually exclusive") - - # Convert to center of mass; domain checks ensure 0 < alpha <= 1 - if comass is not None: - if comass < 0: - raise ValueError("comass must satisfy: comass >= 0") - elif span is not None: - if span < 1: - raise ValueError("span must satisfy: span >= 1") - comass = (span - 1) / 2 - elif halflife is not None: - if halflife <= 0: - raise ValueError("halflife must satisfy: halflife > 0") - decay = 1 - np.exp(np.log(0.5) / halflife) - comass = 1 / decay - 1 - elif alpha is not None: - if alpha <= 0 or alpha > 1: - raise ValueError("alpha must satisfy: 0 < alpha <= 1") - comass = (1 - alpha) / alpha - else: - raise ValueError("Must pass one of comass, span, halflife, or alpha") - - return float(comass) - - -def _calculate_deltas( - times: np.ndarray | NDFrame, - halflife: float | TimedeltaConvertibleTypes | None, -) -> np.ndarray: - """ - Return the diff of the times divided by the half-life. These values are used in - the calculation of the ewm mean. - - Parameters - ---------- - times : np.ndarray, Series - Times corresponding to the observations. Must be monotonically increasing - and ``datetime64[ns]`` dtype. - halflife : float, str, timedelta, optional - Half-life specifying the decay - - Returns - ------- - np.ndarray - Diff of the times divided by the half-life - """ - _times = np.asarray(times.view(np.int64), dtype=np.float64) - # TODO: generalize to non-nano? 
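- # The deltas are expressed in units of the half-life: with the default
- # centre of mass used on the times path, a gap of exactly one
- # half-life halves the weight carried over in the mean recursion.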
- _halflife = float(Timedelta(halflife).as_unit("ns")._value) - return np.diff(_times) / _halflife - - -class ExponentialMovingWindow(BaseWindow): - r""" - Provide exponentially weighted (EW) calculations. - - Exactly one of ``com``, ``span``, ``halflife``, or ``alpha`` must be - provided if ``times`` is not provided. If ``times`` is provided, - ``halflife`` and one of ``com``, ``span`` or ``alpha`` may be provided. - - Parameters - ---------- - com : float, optional - Specify decay in terms of center of mass - - :math:`\alpha = 1 / (1 + com)`, for :math:`com \geq 0`. - - span : float, optional - Specify decay in terms of span - - :math:`\alpha = 2 / (span + 1)`, for :math:`span \geq 1`. - - halflife : float, str, timedelta, optional - Specify decay in terms of half-life - - :math:`\alpha = 1 - \exp\left(-\ln(2) / halflife\right)`, for - :math:`halflife > 0`. - - If ``times`` is specified, a timedelta convertible unit over which an - observation decays to half its value. Only applicable to ``mean()``, - and halflife value will not apply to the other functions. - - alpha : float, optional - Specify smoothing factor :math:`\alpha` directly - - :math:`0 < \alpha \leq 1`. - - min_periods : int, default 0 - Minimum number of observations in window required to have a value; - otherwise, result is ``np.nan``. - - adjust : bool, default True - Divide by decaying adjustment factor in beginning periods to account - for imbalance in relative weightings (viewing EWMA as a moving average). - - - When ``adjust=True`` (default), the EW function is calculated using weights - :math:`w_i = (1 - \alpha)^i`. For example, the EW moving average of the series - [:math:`x_0, x_1, ..., x_t`] would be: - - .. math:: - y_t = \frac{x_t + (1 - \alpha)x_{t-1} + (1 - \alpha)^2 x_{t-2} + ... + (1 - - \alpha)^t x_0}{1 + (1 - \alpha) + (1 - \alpha)^2 + ... + (1 - \alpha)^t} - - - When ``adjust=False``, the exponentially weighted function is calculated - recursively: - - .. math:: - \begin{split} - y_0 &= x_0\\ - y_t &= (1 - \alpha) y_{t-1} + \alpha x_t, - \end{split} - ignore_na : bool, default False - Ignore missing values when calculating weights. - - - When ``ignore_na=False`` (default), weights are based on absolute positions. - For example, the weights of :math:`x_0` and :math:`x_2` used in calculating - the final weighted average of [:math:`x_0`, None, :math:`x_2`] are - :math:`(1-\alpha)^2` and :math:`1` if ``adjust=True``, and - :math:`(1-\alpha)^2` and :math:`\alpha` if ``adjust=False``. - - - When ``ignore_na=True``, weights are based - on relative positions. For example, the weights of :math:`x_0` and :math:`x_2` - used in calculating the final weighted average of - [:math:`x_0`, None, :math:`x_2`] are :math:`1-\alpha` and :math:`1` if - ``adjust=True``, and :math:`1-\alpha` and :math:`\alpha` if ``adjust=False``. - - axis : {0, 1}, default 0 - If ``0`` or ``'index'``, calculate across the rows. - - If ``1`` or ``'columns'``, calculate across the columns. - - For `Series` this parameter is unused and defaults to 0. - - times : np.ndarray, Series, default None - - Only applicable to ``mean()``. - - Times corresponding to the observations. Must be monotonically increasing and - ``datetime64[ns]`` dtype. - - If 1-D array like, a sequence with the same shape as the observations. - - method : str {'single', 'table'}, default 'single' - .. versionadded:: 1.4.0 - - Execute the rolling operation per single column or row (``'single'``) - or over the entire object (``'table'``). 
- - This argument is only implemented when specifying ``engine='numba'`` - in the method call. - - Only applicable to ``mean()`` - - Returns - ------- - pandas.api.typing.ExponentialMovingWindow - - See Also - -------- - rolling : Provides rolling window calculations. - expanding : Provides expanding transformations. - - Notes - ----- - See :ref:`Windowing Operations ` - for further usage details and examples. - - Examples - -------- - >>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]}) - >>> df - B - 0 0.0 - 1 1.0 - 2 2.0 - 3 NaN - 4 4.0 - - >>> df.ewm(com=0.5).mean() - B - 0 0.000000 - 1 0.750000 - 2 1.615385 - 3 1.615385 - 4 3.670213 - >>> df.ewm(alpha=2 / 3).mean() - B - 0 0.000000 - 1 0.750000 - 2 1.615385 - 3 1.615385 - 4 3.670213 - - **adjust** - - >>> df.ewm(com=0.5, adjust=True).mean() - B - 0 0.000000 - 1 0.750000 - 2 1.615385 - 3 1.615385 - 4 3.670213 - >>> df.ewm(com=0.5, adjust=False).mean() - B - 0 0.000000 - 1 0.666667 - 2 1.555556 - 3 1.555556 - 4 3.650794 - - **ignore_na** - - >>> df.ewm(com=0.5, ignore_na=True).mean() - B - 0 0.000000 - 1 0.750000 - 2 1.615385 - 3 1.615385 - 4 3.225000 - >>> df.ewm(com=0.5, ignore_na=False).mean() - B - 0 0.000000 - 1 0.750000 - 2 1.615385 - 3 1.615385 - 4 3.670213 - - **times** - - Exponentially weighted mean with weights calculated with a timedelta ``halflife`` - relative to ``times``. - - >>> times = ['2020-01-01', '2020-01-03', '2020-01-10', '2020-01-15', '2020-01-17'] - >>> df.ewm(halflife='4 days', times=pd.DatetimeIndex(times)).mean() - B - 0 0.000000 - 1 0.585786 - 2 1.523889 - 3 1.523889 - 4 3.233686 - """ - - _attributes = [ - "com", - "span", - "halflife", - "alpha", - "min_periods", - "adjust", - "ignore_na", - "axis", - "times", - "method", - ] - - def __init__( - self, - obj: NDFrame, - com: float | None = None, - span: float | None = None, - halflife: float | TimedeltaConvertibleTypes | None = None, - alpha: float | None = None, - min_periods: int | None = 0, - adjust: bool = True, - ignore_na: bool = False, - axis: Axis = 0, - times: np.ndarray | NDFrame | None = None, - method: str = "single", - *, - selection=None, - ) -> None: - super().__init__( - obj=obj, - min_periods=1 if min_periods is None else max(int(min_periods), 1), - on=None, - center=False, - closed=None, - method=method, - axis=axis, - selection=selection, - ) - self.com = com - self.span = span - self.halflife = halflife - self.alpha = alpha - self.adjust = adjust - self.ignore_na = ignore_na - self.times = times - if self.times is not None: - if not self.adjust: - raise NotImplementedError("times is not supported with adjust=False.") - if not is_datetime64_ns_dtype(self.times): - raise ValueError("times must be datetime64[ns] dtype.") - if len(self.times) != len(obj): - raise ValueError("times must be the same length as the object.") - if not isinstance(self.halflife, (str, datetime.timedelta, np.timedelta64)): - raise ValueError("halflife must be a timedelta convertible object") - if isna(self.times).any(): - raise ValueError("Cannot convert NaT values to integer") - self._deltas = _calculate_deltas(self.times, self.halflife) - # Halflife is no longer applicable when calculating COM - # But allow COM to still be calculated if the user passes other decay args - if common.count_not_none(self.com, self.span, self.alpha) > 0: - self._com = get_center_of_mass(self.com, self.span, None, self.alpha) - else: - self._com = 1.0 - else: - if self.halflife is not None and isinstance( - self.halflife, (str, datetime.timedelta, np.timedelta64) - ): - raise ValueError( 
- "halflife can only be a timedelta convertible argument if " - "times is not None." - ) - # Without times, points are equally spaced - self._deltas = np.ones( - max(self.obj.shape[self.axis] - 1, 0), dtype=np.float64 - ) - self._com = get_center_of_mass( - # error: Argument 3 to "get_center_of_mass" has incompatible type - # "Union[float, Any, None, timedelta64, signedinteger[_64Bit]]"; - # expected "Optional[float]" - self.com, - self.span, - self.halflife, # type: ignore[arg-type] - self.alpha, - ) - - def _check_window_bounds( - self, start: np.ndarray, end: np.ndarray, num_vals: int - ) -> None: - # emw algorithms are iterative with each point - # ExponentialMovingWindowIndexer "bounds" are the entire window - pass - - def _get_window_indexer(self) -> BaseIndexer: - """ - Return an indexer class that will compute the window start and end bounds - """ - return ExponentialMovingWindowIndexer() - - def online( - self, engine: str = "numba", engine_kwargs=None - ) -> OnlineExponentialMovingWindow: - """ - Return an ``OnlineExponentialMovingWindow`` object to calculate - exponentially moving window aggregations in an online method. - - .. versionadded:: 1.3.0 - - Parameters - ---------- - engine: str, default ``'numba'`` - Execution engine to calculate online aggregations. - Applies to all supported aggregation methods. - - engine_kwargs : dict, default None - Applies to all supported aggregation methods. - - * For ``'numba'`` engine, the engine can accept ``nopython``, ``nogil`` - and ``parallel`` dictionary keys. The values must either be ``True`` or - ``False``. The default ``engine_kwargs`` for the ``'numba'`` engine is - ``{{'nopython': True, 'nogil': False, 'parallel': False}}`` and will be - applied to the function - - Returns - ------- - OnlineExponentialMovingWindow - """ - return OnlineExponentialMovingWindow( - obj=self.obj, - com=self.com, - span=self.span, - halflife=self.halflife, - alpha=self.alpha, - min_periods=self.min_periods, - adjust=self.adjust, - ignore_na=self.ignore_na, - axis=self.axis, - times=self.times, - engine=engine, - engine_kwargs=engine_kwargs, - selection=self._selection, - ) - - @doc( - _shared_docs["aggregate"], - see_also=dedent( - """ - See Also - -------- - pandas.DataFrame.rolling.aggregate - """ - ), - examples=dedent( - """ - Examples - -------- - >>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]}) - >>> df - A B C - 0 1 4 7 - 1 2 5 8 - 2 3 6 9 - - >>> df.ewm(alpha=0.5).mean() - A B C - 0 1.000000 4.000000 7.000000 - 1 1.666667 4.666667 7.666667 - 2 2.428571 5.428571 8.428571 - """ - ), - klass="Series/Dataframe", - axis="", - ) - def aggregate(self, func, *args, **kwargs): - return super().aggregate(func, *args, **kwargs) - - agg = aggregate - - @doc( - template_header, - create_section_header("Parameters"), - kwargs_numeric_only, - window_agg_numba_parameters(), - create_section_header("Returns"), - template_returns, - create_section_header("See Also"), - template_see_also, - create_section_header("Notes"), - numba_notes, - create_section_header("Examples"), - dedent( - """\ - >>> ser = pd.Series([1, 2, 3, 4]) - >>> ser.ewm(alpha=.2).mean() - 0 1.000000 - 1 1.555556 - 2 2.147541 - 3 2.775068 - dtype: float64 - """ - ), - window_method="ewm", - aggregation_description="(exponential weighted moment) mean", - agg_method="mean", - ) - def mean( - self, - numeric_only: bool = False, - engine=None, - engine_kwargs=None, - ): - if maybe_use_numba(engine): - if self.method == "single": - func = generate_numba_ewm_func - else: - func 
= generate_numba_ewm_table_func - ewm_func = func( - **get_jit_arguments(engine_kwargs), - com=self._com, - adjust=self.adjust, - ignore_na=self.ignore_na, - deltas=tuple(self._deltas), - normalize=True, - ) - return self._apply(ewm_func, name="mean") - elif engine in ("cython", None): - if engine_kwargs is not None: - raise ValueError("cython engine does not accept engine_kwargs") - - deltas = None if self.times is None else self._deltas - window_func = partial( - window_aggregations.ewm, - com=self._com, - adjust=self.adjust, - ignore_na=self.ignore_na, - deltas=deltas, - normalize=True, - ) - return self._apply(window_func, name="mean", numeric_only=numeric_only) - else: - raise ValueError("engine must be either 'numba' or 'cython'") - - @doc( - template_header, - create_section_header("Parameters"), - kwargs_numeric_only, - window_agg_numba_parameters(), - create_section_header("Returns"), - template_returns, - create_section_header("See Also"), - template_see_also, - create_section_header("Notes"), - numba_notes, - create_section_header("Examples"), - dedent( - """\ - >>> ser = pd.Series([1, 2, 3, 4]) - >>> ser.ewm(alpha=.2).sum() - 0 1.000 - 1 2.800 - 2 5.240 - 3 8.192 - dtype: float64 - """ - ), - window_method="ewm", - aggregation_description="(exponential weighted moment) sum", - agg_method="sum", - ) - def sum( - self, - numeric_only: bool = False, - engine=None, - engine_kwargs=None, - ): - if not self.adjust: - raise NotImplementedError("sum is not implemented with adjust=False") - if maybe_use_numba(engine): - if self.method == "single": - func = generate_numba_ewm_func - else: - func = generate_numba_ewm_table_func - ewm_func = func( - **get_jit_arguments(engine_kwargs), - com=self._com, - adjust=self.adjust, - ignore_na=self.ignore_na, - deltas=tuple(self._deltas), - normalize=False, - ) - return self._apply(ewm_func, name="sum") - elif engine in ("cython", None): - if engine_kwargs is not None: - raise ValueError("cython engine does not accept engine_kwargs") - - deltas = None if self.times is None else self._deltas - window_func = partial( - window_aggregations.ewm, - com=self._com, - adjust=self.adjust, - ignore_na=self.ignore_na, - deltas=deltas, - normalize=False, - ) - return self._apply(window_func, name="sum", numeric_only=numeric_only) - else: - raise ValueError("engine must be either 'numba' or 'cython'") - - @doc( - template_header, - create_section_header("Parameters"), - dedent( - """\ - bias : bool, default False - Use a standard estimation bias correction. 
- """ - ), - kwargs_numeric_only, - create_section_header("Returns"), - template_returns, - create_section_header("See Also"), - template_see_also, - create_section_header("Examples"), - dedent( - """\ - >>> ser = pd.Series([1, 2, 3, 4]) - >>> ser.ewm(alpha=.2).std() - 0 NaN - 1 0.707107 - 2 0.995893 - 3 1.277320 - dtype: float64 - """ - ), - window_method="ewm", - aggregation_description="(exponential weighted moment) standard deviation", - agg_method="std", - ) - def std(self, bias: bool = False, numeric_only: bool = False): - if ( - numeric_only - and self._selected_obj.ndim == 1 - and not is_numeric_dtype(self._selected_obj.dtype) - ): - # Raise directly so error message says std instead of var - raise NotImplementedError( - f"{type(self).__name__}.std does not implement numeric_only" - ) - return zsqrt(self.var(bias=bias, numeric_only=numeric_only)) - - @doc( - template_header, - create_section_header("Parameters"), - dedent( - """\ - bias : bool, default False - Use a standard estimation bias correction. - """ - ), - kwargs_numeric_only, - create_section_header("Returns"), - template_returns, - create_section_header("See Also"), - template_see_also, - create_section_header("Examples"), - dedent( - """\ - >>> ser = pd.Series([1, 2, 3, 4]) - >>> ser.ewm(alpha=.2).var() - 0 NaN - 1 0.500000 - 2 0.991803 - 3 1.631547 - dtype: float64 - """ - ), - window_method="ewm", - aggregation_description="(exponential weighted moment) variance", - agg_method="var", - ) - def var(self, bias: bool = False, numeric_only: bool = False): - window_func = window_aggregations.ewmcov - wfunc = partial( - window_func, - com=self._com, - adjust=self.adjust, - ignore_na=self.ignore_na, - bias=bias, - ) - - def var_func(values, begin, end, min_periods): - return wfunc(values, begin, end, min_periods, values) - - return self._apply(var_func, name="var", numeric_only=numeric_only) - - @doc( - template_header, - create_section_header("Parameters"), - dedent( - """\ - other : Series or DataFrame , optional - If not supplied then will default to self and produce pairwise - output. - pairwise : bool, default None - If False then only matching columns between self and other will be - used and the output will be a DataFrame. - If True then all pairwise combinations will be calculated and the - output will be a MultiIndex DataFrame in the case of DataFrame - inputs. In the case of missing elements, only complete pairwise - observations will be used. - bias : bool, default False - Use a standard estimation bias correction. 
- """ - ), - kwargs_numeric_only, - create_section_header("Returns"), - template_returns, - create_section_header("See Also"), - template_see_also, - create_section_header("Examples"), - dedent( - """\ - >>> ser1 = pd.Series([1, 2, 3, 4]) - >>> ser2 = pd.Series([10, 11, 13, 16]) - >>> ser1.ewm(alpha=.2).cov(ser2) - 0 NaN - 1 0.500000 - 2 1.524590 - 3 3.408836 - dtype: float64 - """ - ), - window_method="ewm", - aggregation_description="(exponential weighted moment) sample covariance", - agg_method="cov", - ) - def cov( - self, - other: DataFrame | Series | None = None, - pairwise: bool | None = None, - bias: bool = False, - numeric_only: bool = False, - ): - from pandas import Series - - self._validate_numeric_only("cov", numeric_only) - - def cov_func(x, y): - x_array = self._prep_values(x) - y_array = self._prep_values(y) - window_indexer = self._get_window_indexer() - min_periods = ( - self.min_periods - if self.min_periods is not None - else window_indexer.window_size - ) - start, end = window_indexer.get_window_bounds( - num_values=len(x_array), - min_periods=min_periods, - center=self.center, - closed=self.closed, - step=self.step, - ) - result = window_aggregations.ewmcov( - x_array, - start, - end, - # error: Argument 4 to "ewmcov" has incompatible type - # "Optional[int]"; expected "int" - self.min_periods, # type: ignore[arg-type] - y_array, - self._com, - self.adjust, - self.ignore_na, - bias, - ) - return Series(result, index=x.index, name=x.name, copy=False) - - return self._apply_pairwise( - self._selected_obj, other, pairwise, cov_func, numeric_only - ) - - @doc( - template_header, - create_section_header("Parameters"), - dedent( - """\ - other : Series or DataFrame, optional - If not supplied then will default to self and produce pairwise - output. - pairwise : bool, default None - If False then only matching columns between self and other will be - used and the output will be a DataFrame. - If True then all pairwise combinations will be calculated and the - output will be a MultiIndex DataFrame in the case of DataFrame - inputs. In the case of missing elements, only complete pairwise - observations will be used. 
- """ - ), - kwargs_numeric_only, - create_section_header("Returns"), - template_returns, - create_section_header("See Also"), - template_see_also, - create_section_header("Examples"), - dedent( - """\ - >>> ser1 = pd.Series([1, 2, 3, 4]) - >>> ser2 = pd.Series([10, 11, 13, 16]) - >>> ser1.ewm(alpha=.2).corr(ser2) - 0 NaN - 1 1.000000 - 2 0.982821 - 3 0.977802 - dtype: float64 - """ - ), - window_method="ewm", - aggregation_description="(exponential weighted moment) sample correlation", - agg_method="corr", - ) - def corr( - self, - other: DataFrame | Series | None = None, - pairwise: bool | None = None, - numeric_only: bool = False, - ): - from pandas import Series - - self._validate_numeric_only("corr", numeric_only) - - def cov_func(x, y): - x_array = self._prep_values(x) - y_array = self._prep_values(y) - window_indexer = self._get_window_indexer() - min_periods = ( - self.min_periods - if self.min_periods is not None - else window_indexer.window_size - ) - start, end = window_indexer.get_window_bounds( - num_values=len(x_array), - min_periods=min_periods, - center=self.center, - closed=self.closed, - step=self.step, - ) - - def _cov(X, Y): - return window_aggregations.ewmcov( - X, - start, - end, - min_periods, - Y, - self._com, - self.adjust, - self.ignore_na, - True, - ) - - with np.errstate(all="ignore"): - cov = _cov(x_array, y_array) - x_var = _cov(x_array, x_array) - y_var = _cov(y_array, y_array) - result = cov / zsqrt(x_var * y_var) - return Series(result, index=x.index, name=x.name, copy=False) - - return self._apply_pairwise( - self._selected_obj, other, pairwise, cov_func, numeric_only - ) - - -class ExponentialMovingWindowGroupby(BaseWindowGroupby, ExponentialMovingWindow): - """ - Provide an exponential moving window groupby implementation. - """ - - _attributes = ExponentialMovingWindow._attributes + BaseWindowGroupby._attributes - - def __init__(self, obj, *args, _grouper=None, **kwargs) -> None: - super().__init__(obj, *args, _grouper=_grouper, **kwargs) - - if not obj.empty and self.times is not None: - # sort the times and recalculate the deltas according to the groups - groupby_order = np.concatenate(list(self._grouper.indices.values())) - self._deltas = _calculate_deltas( - self.times.take(groupby_order), - self.halflife, - ) - - def _get_window_indexer(self) -> GroupbyIndexer: - """ - Return an indexer class that will compute the window start and end bounds - - Returns - ------- - GroupbyIndexer - """ - window_indexer = GroupbyIndexer( - groupby_indices=self._grouper.indices, - window_indexer=ExponentialMovingWindowIndexer, - ) - return window_indexer - - -class OnlineExponentialMovingWindow(ExponentialMovingWindow): - def __init__( - self, - obj: NDFrame, - com: float | None = None, - span: float | None = None, - halflife: float | TimedeltaConvertibleTypes | None = None, - alpha: float | None = None, - min_periods: int | None = 0, - adjust: bool = True, - ignore_na: bool = False, - axis: Axis = 0, - times: np.ndarray | NDFrame | None = None, - engine: str = "numba", - engine_kwargs: dict[str, bool] | None = None, - *, - selection=None, - ) -> None: - if times is not None: - raise NotImplementedError( - "times is not implemented with online operations." 
- ) - super().__init__( - obj=obj, - com=com, - span=span, - halflife=halflife, - alpha=alpha, - min_periods=min_periods, - adjust=adjust, - ignore_na=ignore_na, - axis=axis, - times=times, - selection=selection, - ) - self._mean = EWMMeanState( - self._com, self.adjust, self.ignore_na, self.axis, obj.shape - ) - if maybe_use_numba(engine): - self.engine = engine - self.engine_kwargs = engine_kwargs - else: - raise ValueError("'numba' is the only supported engine") - - def reset(self) -> None: - """ - Reset the state captured by `update` calls. - """ - self._mean.reset() - - def aggregate(self, func, *args, **kwargs): - raise NotImplementedError("aggregate is not implemented.") - - def std(self, bias: bool = False, *args, **kwargs): - raise NotImplementedError("std is not implemented.") - - def corr( - self, - other: DataFrame | Series | None = None, - pairwise: bool | None = None, - numeric_only: bool = False, - ): - raise NotImplementedError("corr is not implemented.") - - def cov( - self, - other: DataFrame | Series | None = None, - pairwise: bool | None = None, - bias: bool = False, - numeric_only: bool = False, - ): - raise NotImplementedError("cov is not implemented.") - - def var(self, bias: bool = False, numeric_only: bool = False): - raise NotImplementedError("var is not implemented.") - - def mean(self, *args, update=None, update_times=None, **kwargs): - """ - Calculate an online exponentially weighted mean. - - Parameters - ---------- - update: DataFrame or Series, default None - New values to continue calculating the - exponentially weighted mean from the last values and weights. - Values should be float64 dtype. - - ``update`` needs to be ``None`` the first time the - exponentially weighted mean is calculated. - - update_times: Series or 1-D np.ndarray, default None - New times to continue calculating the - exponentially weighted mean from the last values and weights. - If ``None``, values are assumed to be evenly spaced - in time. - This feature is currently unsupported. 
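- 
- Because state is carried over between calls, chaining ``update``
- calls reproduces a full recomputation (a sketch; the online engine
- requires numba to be installed):
- 
- >>> df = pd.DataFrame({"a": np.arange(5, dtype="float64")})
- >>> online = df.head(2).ewm(com=0.5).online()
- >>> parts = pd.concat([online.mean(), online.mean(update=df.tail(3))])
- >>> bool(np.allclose(parts, df.ewm(com=0.5).mean()))
- True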
- - Returns - ------- - DataFrame or Series - - Examples - -------- - >>> df = pd.DataFrame({"a": range(5), "b": range(5, 10)}) - >>> online_ewm = df.head(2).ewm(0.5).online() - >>> online_ewm.mean() - a b - 0 0.00 5.00 - 1 0.75 5.75 - >>> online_ewm.mean(update=df.tail(3)) - a b - 2 1.615385 6.615385 - 3 2.550000 7.550000 - 4 3.520661 8.520661 - >>> online_ewm.reset() - >>> online_ewm.mean() - a b - 0 0.00 5.00 - 1 0.75 5.75 - """ - result_kwargs = {} - is_frame = self._selected_obj.ndim == 2 - if update_times is not None: - raise NotImplementedError("update_times is not implemented.") - update_deltas = np.ones( - max(self._selected_obj.shape[self.axis - 1] - 1, 0), dtype=np.float64 - ) - if update is not None: - if self._mean.last_ewm is None: - raise ValueError( - "Must call mean with update=None first before passing update" - ) - result_from = 1 - result_kwargs["index"] = update.index - if is_frame: - last_value = self._mean.last_ewm[np.newaxis, :] - result_kwargs["columns"] = update.columns - else: - last_value = self._mean.last_ewm - result_kwargs["name"] = update.name - np_array = np.concatenate((last_value, update.to_numpy())) - else: - result_from = 0 - result_kwargs["index"] = self._selected_obj.index - if is_frame: - result_kwargs["columns"] = self._selected_obj.columns - else: - result_kwargs["name"] = self._selected_obj.name - np_array = self._selected_obj.astype(np.float64).to_numpy() - ewma_func = generate_online_numba_ewma_func( - **get_jit_arguments(self.engine_kwargs) - ) - result = self._mean.run_ewm( - np_array if is_frame else np_array[:, np.newaxis], - update_deltas, - self.min_periods, - ewma_func, - ) - if not is_frame: - result = result.squeeze() - result = result[result_from:] - result = self._selected_obj._constructor(result, **result_kwargs) - return result diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_categorical.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_categorical.py deleted file mode 100644 index 2730b2ffcc4e3f8fd641916c2505b75a8fa157d9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/reshape/concat/test_categorical.py +++ /dev/null @@ -1,271 +0,0 @@ -from datetime import datetime - -import numpy as np - -from pandas.core.dtypes.dtypes import CategoricalDtype - -import pandas as pd -from pandas import ( - Categorical, - DataFrame, - Series, -) -import pandas._testing as tm - - -class TestCategoricalConcat: - def test_categorical_concat(self, sort): - # See GH 10177 - df1 = DataFrame( - np.arange(18, dtype="int64").reshape(6, 3), columns=["a", "b", "c"] - ) - - df2 = DataFrame(np.arange(14, dtype="int64").reshape(7, 2), columns=["a", "c"]) - - cat_values = ["one", "one", "two", "one", "two", "two", "one"] - df2["h"] = Series(Categorical(cat_values)) - - res = pd.concat((df1, df2), axis=0, ignore_index=True, sort=sort) - exp = DataFrame( - { - "a": [0, 3, 6, 9, 12, 15, 0, 2, 4, 6, 8, 10, 12], - "b": [ - 1, - 4, - 7, - 10, - 13, - 16, - np.nan, - np.nan, - np.nan, - np.nan, - np.nan, - np.nan, - np.nan, - ], - "c": [2, 5, 8, 11, 14, 17, 1, 3, 5, 7, 9, 11, 13], - "h": [None] * 6 + cat_values, - } - ) - exp["h"] = exp["h"].astype(df2["h"].dtype) - tm.assert_frame_equal(res, exp) - - def test_categorical_concat_dtypes(self): - # GH8143 - index = ["cat", "obj", "num"] - cat = Categorical(["a", "b", "c"]) - obj = Series(["a", "b", "c"]) - num = Series([1, 2, 3]) - df = 
pd.concat([Series(cat), obj, num], axis=1, keys=index) - - result = df.dtypes == "object" - expected = Series([False, True, False], index=index) - tm.assert_series_equal(result, expected) - - result = df.dtypes == "int64" - expected = Series([False, False, True], index=index) - tm.assert_series_equal(result, expected) - - result = df.dtypes == "category" - expected = Series([True, False, False], index=index) - tm.assert_series_equal(result, expected) - - def test_concat_categoricalindex(self): - # GH 16111, categories that aren't lexsorted - categories = [9, 0, 1, 2, 3] - - a = Series(1, index=pd.CategoricalIndex([9, 0], categories=categories)) - b = Series(2, index=pd.CategoricalIndex([0, 1], categories=categories)) - c = Series(3, index=pd.CategoricalIndex([1, 2], categories=categories)) - - result = pd.concat([a, b, c], axis=1) - - exp_idx = pd.CategoricalIndex([9, 0, 1, 2], categories=categories) - exp = DataFrame( - { - 0: [1, 1, np.nan, np.nan], - 1: [np.nan, 2, 2, np.nan], - 2: [np.nan, np.nan, 3, 3], - }, - columns=[0, 1, 2], - index=exp_idx, - ) - tm.assert_frame_equal(result, exp) - - def test_categorical_concat_preserve(self): - # GH 8641 series concat not preserving category dtype - # GH 13524 can concat different categories - s = Series(list("abc"), dtype="category") - s2 = Series(list("abd"), dtype="category") - - exp = Series(list("abcabd")) - res = pd.concat([s, s2], ignore_index=True) - tm.assert_series_equal(res, exp) - - exp = Series(list("abcabc"), dtype="category") - res = pd.concat([s, s], ignore_index=True) - tm.assert_series_equal(res, exp) - - exp = Series(list("abcabc"), index=[0, 1, 2, 0, 1, 2], dtype="category") - res = pd.concat([s, s]) - tm.assert_series_equal(res, exp) - - a = Series(np.arange(6, dtype="int64")) - b = Series(list("aabbca")) - - df2 = DataFrame({"A": a, "B": b.astype(CategoricalDtype(list("cab")))}) - res = pd.concat([df2, df2]) - exp = DataFrame( - { - "A": pd.concat([a, a]), - "B": pd.concat([b, b]).astype(CategoricalDtype(list("cab"))), - } - ) - tm.assert_frame_equal(res, exp) - - def test_categorical_index_preserver(self): - a = Series(np.arange(6, dtype="int64")) - b = Series(list("aabbca")) - - df2 = DataFrame( - {"A": a, "B": b.astype(CategoricalDtype(list("cab")))} - ).set_index("B") - result = pd.concat([df2, df2]) - expected = DataFrame( - { - "A": pd.concat([a, a]), - "B": pd.concat([b, b]).astype(CategoricalDtype(list("cab"))), - } - ).set_index("B") - tm.assert_frame_equal(result, expected) - - # wrong categories -> uses concat_compat, which casts to object - df3 = DataFrame( - {"A": a, "B": Categorical(b, categories=list("abe"))} - ).set_index("B") - result = pd.concat([df2, df3]) - expected = pd.concat( - [ - df2.set_axis(df2.index.astype(object), axis=0), - df3.set_axis(df3.index.astype(object), axis=0), - ] - ) - tm.assert_frame_equal(result, expected) - - def test_concat_categorical_tz(self): - # GH-23816 - a = Series(pd.date_range("2017-01-01", periods=2, tz="US/Pacific")) - b = Series(["a", "b"], dtype="category") - result = pd.concat([a, b], ignore_index=True) - expected = Series( - [ - pd.Timestamp("2017-01-01", tz="US/Pacific"), - pd.Timestamp("2017-01-02", tz="US/Pacific"), - "a", - "b", - ] - ) - tm.assert_series_equal(result, expected) - - def test_concat_categorical_datetime(self): - # GH-39443 - df1 = DataFrame( - {"x": Series(datetime(2021, 1, 1), index=[0], dtype="category")} - ) - df2 = DataFrame( - {"x": Series(datetime(2021, 1, 2), index=[1], dtype="category")} - ) - - result = pd.concat([df1, df2]) - expected 
= DataFrame( - {"x": Series([datetime(2021, 1, 1), datetime(2021, 1, 2)])} - ) - - tm.assert_equal(result, expected) - - def test_concat_categorical_unchanged(self): - # GH-12007 - # test fix for when concat on categorical and float - # coerces dtype categorical -> float - df = DataFrame(Series(["a", "b", "c"], dtype="category", name="A")) - ser = Series([0, 1, 2], index=[0, 1, 3], name="B") - result = pd.concat([df, ser], axis=1) - expected = DataFrame( - { - "A": Series(["a", "b", "c", np.nan], dtype="category"), - "B": Series([0, 1, np.nan, 2], dtype="float"), - } - ) - tm.assert_equal(result, expected) - - def test_categorical_concat_gh7864(self): - # GH 7864 - # make sure ordering is preserved - df = DataFrame({"id": [1, 2, 3, 4, 5, 6], "raw_grade": list("abbaae")}) - df["grade"] = Categorical(df["raw_grade"]) - df["grade"].cat.set_categories(["e", "a", "b"]) - - df1 = df[0:3] - df2 = df[3:] - - tm.assert_index_equal(df["grade"].cat.categories, df1["grade"].cat.categories) - tm.assert_index_equal(df["grade"].cat.categories, df2["grade"].cat.categories) - - dfx = pd.concat([df1, df2]) - tm.assert_index_equal(df["grade"].cat.categories, dfx["grade"].cat.categories) - - dfa = df1._append(df2) - tm.assert_index_equal(df["grade"].cat.categories, dfa["grade"].cat.categories) - - def test_categorical_index_upcast(self): - # GH 17629 - # test upcasting to object when concatinating on categorical indexes - # with non-identical categories - - a = DataFrame({"foo": [1, 2]}, index=Categorical(["foo", "bar"])) - b = DataFrame({"foo": [4, 3]}, index=Categorical(["baz", "bar"])) - - res = pd.concat([a, b]) - exp = DataFrame({"foo": [1, 2, 4, 3]}, index=["foo", "bar", "baz", "bar"]) - - tm.assert_equal(res, exp) - - a = Series([1, 2], index=Categorical(["foo", "bar"])) - b = Series([4, 3], index=Categorical(["baz", "bar"])) - - res = pd.concat([a, b]) - exp = Series([1, 2, 4, 3], index=["foo", "bar", "baz", "bar"]) - - tm.assert_equal(res, exp) - - def test_categorical_missing_from_one_frame(self): - # GH 25412 - df1 = DataFrame({"f1": [1, 2, 3]}) - df2 = DataFrame({"f1": [2, 3, 1], "f2": Series([4, 4, 4]).astype("category")}) - result = pd.concat([df1, df2], sort=True) - dtype = CategoricalDtype([4]) - expected = DataFrame( - { - "f1": [1, 2, 3, 2, 3, 1], - "f2": Categorical.from_codes([-1, -1, -1, 0, 0, 0], dtype=dtype), - }, - index=[0, 1, 2, 0, 1, 2], - ) - tm.assert_frame_equal(result, expected) - - def test_concat_categorical_same_categories_different_order(self): - # https://github.com/pandas-dev/pandas/issues/24845 - - c1 = pd.CategoricalIndex(["a", "a"], categories=["a", "b"], ordered=False) - c2 = pd.CategoricalIndex(["b", "b"], categories=["b", "a"], ordered=False) - c3 = pd.CategoricalIndex( - ["a", "a", "b", "b"], categories=["a", "b"], ordered=False - ) - - df1 = DataFrame({"A": [1, 2]}, index=c1) - df2 = DataFrame({"A": [3, 4]}, index=c2) - - result = pd.concat((df1, df2)) - expected = DataFrame({"A": [1, 2, 3, 4]}, index=c3) - tm.assert_frame_equal(result, expected) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/json.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/json.py deleted file mode 100644 index 23583871e8f2a466abec0bce1397fb495b9c212d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/json.py +++ /dev/null @@ -1,140 +0,0 @@ -from json import loads, dumps -from typing import Any, Callable, Optional, Union - -from .text 
import Text
-from .highlighter import JSONHighlighter, NullHighlighter
-
-
-class JSON:
- """A renderable which pretty prints JSON.
- 
- Args:
- json (str): JSON encoded data.
- indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2.
- highlight (bool, optional): Enable highlighting. Defaults to True.
- skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
- ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to True.
- check_circular (bool, optional): Check for circular references. Defaults to True.
- allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
- default (Callable, optional): A callable that converts values that cannot be encoded
- into something that can be JSON encoded. Defaults to None.
- sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
- """
- 
- def __init__(
- self,
- json: str,
- indent: Union[None, int, str] = 2,
- highlight: bool = True,
- skip_keys: bool = False,
- ensure_ascii: bool = True,
- check_circular: bool = True,
- allow_nan: bool = True,
- default: Optional[Callable[[Any], Any]] = None,
- sort_keys: bool = False,
- ) -> None:
- data = loads(json)
- json = dumps(
- data,
- indent=indent,
- skipkeys=skip_keys,
- ensure_ascii=ensure_ascii,
- check_circular=check_circular,
- allow_nan=allow_nan,
- default=default,
- sort_keys=sort_keys,
- )
- highlighter = JSONHighlighter() if highlight else NullHighlighter()
- self.text = highlighter(json)
- self.text.no_wrap = True
- self.text.overflow = None
- 
- @classmethod
- def from_data(
- cls,
- data: Any,
- indent: Union[None, int, str] = 2,
- highlight: bool = True,
- skip_keys: bool = False,
- ensure_ascii: bool = True,
- check_circular: bool = True,
- allow_nan: bool = True,
- default: Optional[Callable[[Any], Any]] = None,
- sort_keys: bool = False,
- ) -> "JSON":
- """Encodes a JSON object from arbitrary data.
- 
- Args:
- data (Any): An object that may be encoded into JSON
- indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2.
- highlight (bool, optional): Enable highlighting. Defaults to True.
- skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False.
- ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to True.
- check_circular (bool, optional): Check for circular references. Defaults to True.
- allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True.
- default (Callable, optional): A callable that converts values that cannot be encoded
- into something that can be JSON encoded. Defaults to None.
- sort_keys (bool, optional): Sort dictionary keys. Defaults to False.
- 
- Returns:
- JSON: New JSON object from the given data.
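- 
- Example:
- A minimal usage sketch (assuming only the vendored Console;
- any JSON-serializable data will do):
- 
- >>> from pip._vendor.rich.console import Console
- >>> Console().print(JSON.from_data({"name": "rich"}))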
- """ - json_instance: "JSON" = cls.__new__(cls) - json = dumps( - data, - indent=indent, - skipkeys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - highlighter = JSONHighlighter() if highlight else NullHighlighter() - json_instance.text = highlighter(json) - json_instance.text.no_wrap = True - json_instance.text.overflow = None - return json_instance - - def __rich__(self) -> Text: - return self.text - - -if __name__ == "__main__": - - import argparse - import sys - - parser = argparse.ArgumentParser(description="Pretty print json") - parser.add_argument( - "path", - metavar="PATH", - help="path to file, or - for stdin", - ) - parser.add_argument( - "-i", - "--indent", - metavar="SPACES", - type=int, - help="Number of spaces in an indent", - default=2, - ) - args = parser.parse_args() - - from pip._vendor.rich.console import Console - - console = Console() - error_console = Console(stderr=True) - - try: - if args.path == "-": - json_data = sys.stdin.read() - else: - with open(args.path, "rt") as json_file: - json_data = json_file.read() - except Exception as error: - error_console.print(f"Unable to read {args.path!r}; {error}") - sys.exit(-1) - - console.print(JSON(json_data, indent=args.indent), soft_wrap=True) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/amdgpu.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/amdgpu.py deleted file mode 100644 index 860dfd442150d1c557dfc8bfd7a92647ec8ce2f5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/amdgpu.py +++ /dev/null @@ -1,54 +0,0 @@ -""" - pygments.lexers.amdgpu - ~~~~~~~~~~~~~~~~~~~~~~ - - Lexers for the AMDGPU ISA assembly. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, words -from pygments.token import Name, Text, Keyword, Whitespace, Number, Comment - -import re - -__all__ = ['AMDGPULexer'] - - -class AMDGPULexer(RegexLexer): - """ - For AMD GPU assembly. - - .. 
versionadded:: 2.8 - """ - name = 'AMDGPU' - aliases = ['amdgpu'] - filenames = ['*.isa'] - - flags = re.IGNORECASE - - tokens = { - 'root': [ - (r'\s+', Whitespace), - (r'[\r\n]+', Text), - (r'(([a-z_0-9])*:([a-z_0-9])*)', Name.Attribute), - (r'(\[|\]|\(|\)|,|\:|\&)', Text), - (r'([;#]|//).*?\n', Comment.Single), - (r'((s_)?(scratch|ds|buffer|flat|image)_[a-z0-9_]+)', Keyword.Reserved), - (r'(_lo|_hi)', Name.Variable), - (r'(vmcnt|lgkmcnt|expcnt)', Name.Attribute), - (r'(attr[0-9].[a-z])', Name.Attribute), - (words(( - 'op', 'vaddr', 'vdata', 'off', 'soffset', 'srsrc', 'format', - 'offset', 'offen', 'idxen', 'glc', 'dlc', 'slc', 'tfe', 'lds', - 'lit', 'unorm'), suffix=r'\b'), Name.Attribute), - (r'(label_[a-z0-9]+)', Keyword), - (r'(_L[0-9]*)', Name.Variable), - (r'(s|v)_[a-z0-9_]+', Keyword), - (r'(v[0-9.]+|vcc|exec|v)', Name.Variable), - (r's[0-9.]+|s', Name.Variable), - (r'[0-9]+\.[^0-9]+', Number.Float), - (r'(0[xX][a-z0-9]+)|([0-9]+)', Number.Integer) - ] - } diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/tnt.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/tnt.py deleted file mode 100644 index 2251373c5a1bfc5e05f496ff3f8332edc58e08a7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/tnt.py +++ /dev/null @@ -1,271 +0,0 @@ -""" - pygments.lexers.tnt - ~~~~~~~~~~~~~~~~~~~ - - Lexer for Typographic Number Theory. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re - -from pygments.lexer import Lexer -from pygments.token import Text, Comment, Operator, Keyword, Name, Number, \ - Punctuation, Error - -__all__ = ['TNTLexer'] - - -class TNTLexer(Lexer): - """ - Lexer for Typographic Number Theory, as described in the book - Gödel, Escher, Bach, by Douglas R. Hofstadter - - .. versionadded:: 2.7 - """ - - name = 'Typographic Number Theory' - url = 'https://github.com/Kenny2github/language-tnt' - aliases = ['tnt'] - filenames = ['*.tnt'] - - cur = [] - - LOGIC = set('⊃→]&∧^|∨Vv') - OPERATORS = set('+.⋅*') - VARIABLES = set('abcde') - PRIMES = set("'′") - NEGATORS = set('~!') - QUANTIFIERS = set('AE∀∃') - NUMBERS = set('0123456789') - WHITESPACE = set('\t \v\n') - - RULES = re.compile('''(?xi) - joining | separation | double-tilde | fantasy\\ rule - | carry[- ]over(?:\\ of)?(?:\\ line)?\\ ([0-9]+) | detachment - | contrapositive | De\\ Morgan | switcheroo - | specification | generalization | interchange - | existence | symmetry | transitivity - | add\\ S | drop\\ S | induction - | axiom\\ ([1-5]) | premise | push | pop - ''') - LINENOS = re.compile(r'(?:[0-9]+)(?:(?:, ?|,? 
and )(?:[0-9]+))*') - COMMENT = re.compile(r'\[[^\n\]]+\]') - - def __init__(self, *args, **kwargs): - Lexer.__init__(self, *args, **kwargs) - self.cur = [] - - def whitespace(self, start, text, required=False): - """Tokenize whitespace.""" - end = start - try: - while text[end] in self.WHITESPACE: - end += 1 - except IndexError: - end = len(text) - if required and end == start: - raise AssertionError - if end != start: - self.cur.append((start, Text, text[start:end])) - return end - - def variable(self, start, text): - """Tokenize a variable.""" - if text[start] not in self.VARIABLES: - raise AssertionError - end = start+1 - while text[end] in self.PRIMES: - end += 1 - self.cur.append((start, Name.Variable, text[start:end])) - return end - - def term(self, start, text): - """Tokenize a term.""" - if text[start] == 'S': # S...S(...) or S...0 - end = start+1 - while text[end] == 'S': - end += 1 - self.cur.append((start, Number.Integer, text[start:end])) - return self.term(end, text) - if text[start] == '0': # the singleton 0 - self.cur.append((start, Number.Integer, text[start])) - return start+1 - if text[start] in self.VARIABLES: # a''... - return self.variable(start, text) - if text[start] == '(': # (...+...) - self.cur.append((start, Punctuation, text[start])) - start = self.term(start+1, text) - if text[start] not in self.OPERATORS: - raise AssertionError - self.cur.append((start, Operator, text[start])) - start = self.term(start+1, text) - if text[start] != ')': - raise AssertionError - self.cur.append((start, Punctuation, text[start])) - return start+1 - raise AssertionError # no matches - - def formula(self, start, text): - """Tokenize a formula.""" - if text[start] in self.NEGATORS: # ~<...> - end = start+1 - while text[end] in self.NEGATORS: - end += 1 - self.cur.append((start, Operator, text[start:end])) - return self.formula(end, text) - if text[start] in self.QUANTIFIERS: # Aa:<...> - self.cur.append((start, Keyword.Declaration, text[start])) - start = self.variable(start+1, text) - if text[start] != ':': - raise AssertionError - self.cur.append((start, Punctuation, text[start])) - return self.formula(start+1, text) - if text[start] == '<': # <...&...> - self.cur.append((start, Punctuation, text[start])) - start = self.formula(start+1, text) - if text[start] not in self.LOGIC: - raise AssertionError - self.cur.append((start, Operator, text[start])) - start = self.formula(start+1, text) - if text[start] != '>': - raise AssertionError - self.cur.append((start, Punctuation, text[start])) - return start+1 - # ...=... 
- start = self.term(start, text) - if text[start] != '=': - raise AssertionError - self.cur.append((start, Operator, text[start])) - start = self.term(start+1, text) - return start - - def rule(self, start, text): - """Tokenize a rule.""" - match = self.RULES.match(text, start) - if match is None: - raise AssertionError - groups = sorted(match.regs[1:]) # exclude whole match - for group in groups: - if group[0] >= 0: # this group matched - self.cur.append((start, Keyword, text[start:group[0]])) - self.cur.append((group[0], Number.Integer, - text[group[0]:group[1]])) - if group[1] != match.end(): - self.cur.append((group[1], Keyword, - text[group[1]:match.end()])) - break - else: - self.cur.append((start, Keyword, text[start:match.end()])) - return match.end() - - def lineno(self, start, text): - """Tokenize a line referral.""" - end = start - while text[end] not in self.NUMBERS: - end += 1 - self.cur.append((start, Punctuation, text[start])) - self.cur.append((start+1, Text, text[start+1:end])) - start = end - match = self.LINENOS.match(text, start) - if match is None: - raise AssertionError - if text[match.end()] != ')': - raise AssertionError - self.cur.append((match.start(), Number.Integer, match.group(0))) - self.cur.append((match.end(), Punctuation, text[match.end()])) - return match.end() + 1 - - def error_till_line_end(self, start, text): - """Mark everything from ``start`` to the end of the line as Error.""" - end = start - try: - while text[end] != '\n': # there's whitespace in rules - end += 1 - except IndexError: - end = len(text) - if end != start: - self.cur.append((start, Error, text[start:end])) - end = self.whitespace(end, text) - return end - - def get_tokens_unprocessed(self, text): - """Returns a list of TNT tokens.""" - self.cur = [] - start = end = self.whitespace(0, text) - while start <= end < len(text): - try: - # try line number - while text[end] in self.NUMBERS: - end += 1 - if end != start: # actual number present - self.cur.append((start, Number.Integer, text[start:end])) - # whitespace is required after a line number - orig = len(self.cur) - try: - start = end = self.whitespace(end, text, True) - except AssertionError: - del self.cur[orig:] - start = end = self.error_till_line_end(end, text) - continue - # at this point it could be a comment - match = self.COMMENT.match(text, start) - if match is not None: - self.cur.append((start, Comment, text[start:match.end()])) - start = end = match.end() - # anything after the closing bracket is invalid - start = end = self.error_till_line_end(start, text) - # do not attempt to process the rest - continue - del match - if text[start] in '[]': # fantasy push or pop - self.cur.append((start, Keyword, text[start])) - start += 1 - end += 1 - else: - # one formula, possibly containing subformulae - orig = len(self.cur) - try: - start = end = self.formula(start, text) - except (AssertionError, RecursionError): # not well-formed - del self.cur[orig:] - while text[end] not in self.WHITESPACE: - end += 1 - self.cur.append((start, Error, text[start:end])) - start = end - # skip whitespace after formula - orig = len(self.cur) - try: - start = end = self.whitespace(end, text, True) - except AssertionError: - del self.cur[orig:] - start = end = self.error_till_line_end(start, text) - continue - # rule proving this formula a theorem - orig = len(self.cur) - try: - start = end = self.rule(start, text) - except AssertionError: - del self.cur[orig:] - start = end = self.error_till_line_end(start, text) - continue - # skip whitespace after 
rule - start = end = self.whitespace(end, text) - # line marker - if text[start] == '(': - orig = len(self.cur) - try: - start = end = self.lineno(start, text) - except AssertionError: - del self.cur[orig:] - start = end = self.error_till_line_end(start, text) - continue - start = end = self.whitespace(start, text) - except IndexError: - try: - del self.cur[orig:] - except NameError: - pass # if orig was never defined, fine - self.error_till_line_end(start, text) - return self.cur diff --git a/spaces/pycoming/bingo/src/components/ui/icons.tsx b/spaces/pycoming/bingo/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/pycoming/bingo/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - - - - - - - - - - - - - - - - - - - - - - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: 
React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, - IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Battlestar Galactica (Mini-Series) (DVD-Rip) PATCHED.md b/spaces/quidiaMuxgu/Expedit-SAM/Battlestar Galactica (Mini-Series) (DVD-Rip) PATCHED.md deleted file mode 100644 index 4d4398d6ed90f5873cf240ecd449c20a9b5d8db8..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Battlestar Galactica (Mini-Series) (DVD-Rip) PATCHED.md +++ /dev/null @@ -1,20 +0,0 @@ - -

    Battlestar Galactica: The Mini-Series That Reimagined Sci-Fi TV

    -

    Battlestar Galactica is a sci-fi television franchise that began with a 1978 series of the same name. The original series followed the survivors of a war between humans and their robotic creations, the Cylons, as they searched for a mythical planet called Earth. The series was cancelled after one season, but spawned a short-lived spin-off, Galactica 1980, and a cult following.

    -

    In 2003, the franchise was revived with a mini-series that reimagined the premise and characters of the original series. The mini-series was written by Ronald D. Moore and David Eick, who had previously worked on Star Trek: The Next Generation and Star Trek: Deep Space Nine. The mini-series was directed by Michael Rymer, who had directed films such as Queen of the Damned and Perfume.

    -

    Battlestar Galactica (Mini-Series) (DVD-Rip)


    Downloadhttps://geags.com/2uCqk6



    -

    The mini-series was a critical and commercial success, attracting over 18 million viewers on its premiere on the Sci-Fi Channel. It was praised for its gritty realism, complex characters, political allegory, and stunning visual effects. It also featured a diverse and talented cast, including Edward James Olmos as Commander William Adama, Mary McDonnell as President Laura Roslin, Katee Sackhoff as Lieutenant Kara "Starbuck" Thrace, Jamie Bamber as Captain Lee "Apollo" Adama, James Callis as Dr. Gaius Baltar, Tricia Helfer as Number Six, Grace Park as Lieutenant Sharon "Boomer" Valerii, and Michael Hogan as Colonel Saul Tigh.

    -

    The mini-series served as a pilot for a new series that ran for four seasons from 2004 to 2009. The series continued to explore the themes and conflicts of the mini-series, while expanding the mythology and lore of the Battlestar Galactica universe. The series won numerous awards and accolades, including Peabody Awards, Emmy Awards, Saturn Awards, Hugo Awards, and a United Nations citation for its portrayal of human rights issues.

    -

The mini-series is available on DVD in various regions and formats. The DVD includes the two-part mini-series with a total running time of 183 minutes. It also includes special features such as a commentary by Moore, Eick and Rymer, a behind-the-scenes documentary called The Miniseries Lowdown, deleted scenes, sketches, and artwork.[^1^] [^2^] [^3^]

    -

    If you are a fan of sci-fi TV or want to experience a thrilling and thought-provoking story of survival and humanity, you should check out Battlestar Galactica: The Mini-Series. It is one of the best examples of how to reboot a classic franchise with respect and creativity.

    - -

    Battlestar Galactica: The Final Five

    -

    One of the most intriguing mysteries of Battlestar Galactica was the identity and origin of the Final Five Cylons. These five models were unknown to the other seven Cylons, who were forbidden to seek them out or even think about them. The Final Five were also unaware of their true nature, living among the human survivors as sleeper agents. Their memories were suppressed by a pact they made with the other Cylons to end the first Cylon War.

    -

    The Final Five were gradually revealed over the course of the series, starting with a vision that D'Anna Biers, a Number Three model, had in the Temple of Five on an algae planet. She saw the faces of the Final Five, but was unable to share her knowledge with anyone else, as her entire line was boxed by the Cylon leadership for her unauthorized quest. The other four were activated by a mysterious musical signal that triggered their latent memories during the trial of Gaius Baltar. They were Samuel Anders, a former resistance fighter and pilot; Tory Foster, an aide to President Laura Roslin; Galen Tyrol, a chief mechanic and union leader; and Saul Tigh, a veteran officer and executive officer of Galactica. The fifth was later revealed to be Ellen Tigh, Saul's wife, who had died on New Caprica but was resurrected by the Cylons.

    -

    The Final Five had a different origin than the other seven Cylons. They were actually descendants of the Thirteenth Tribe, a group of humanoid Cylons who left Kobol thousands of years ago and settled on a planet they called Earth. There, they developed their own civilization and culture, until they repeated the cycle of creating artificial life forms that rebelled against them and destroyed their world. The Final Five were part of a team of scientists who worked on resurrecting technology and managed to escape the nuclear holocaust by downloading into a ship in orbit. They then embarked on a long journey to find the other twelve tribes and warn them about the dangers of creating Cylons.

    -

    -

    Along the way, they encountered another group of Cylons who had rebelled against their human creators in the Twelve Colonies. These Cylons had developed their own resurrection technology, but lacked the ability to procreate or diversify their models. The Final Five offered to help them create new models that could reproduce sexually and have individual personalities, in exchange for ending the war with humanity. The other Cylons agreed, and together they created eight new models: Numbers One through Eight. However, one of them, Number One or John Cavil, resented his human-like form and betrayed the Final Five. He killed them and resurrected them with false memories on the Colonies, while erasing any knowledge of them from the other models.

    -

    The Final Five eventually reunited with each other and with their creations, and played a crucial role in ending the second Cylon War and finding a new home for both humans and Cylons. They also discovered that they were not the only ones who had been manipulated by Cavil. Kara Thrace, a human pilot who had died and returned with a new body and a mysterious destiny, was revealed to be a hybrid created by Daniel, a Number Seven model who was also killed by Cavil. Thrace led the survivors to a habitable planet that they named Earth, where they decided to abandon their technology and start anew.

    -

    The Final Five were some of the most complex and fascinating characters in Battlestar Galactica. They embodied the themes of identity, destiny, memory, and redemption that pervaded the series. They also showed that humans and Cylons were not so different after all, and that they could coexist peacefully if they chose to do so.

    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Dsls Licgen Ssq Solid Squad Catia.md b/spaces/quidiaMuxgu/Expedit-SAM/Dsls Licgen Ssq Solid Squad Catia.md deleted file mode 100644 index 1f1c415cb68840e2601d3d0c6200dcb6912ea152..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Dsls Licgen Ssq Solid Squad Catia.md +++ /dev/null @@ -1,40 +0,0 @@ -

    Dsls Licgen Ssq Solid Squad Catia


    DOWNLOADhttps://geags.com/2uCsEW



    -
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Monstre Et Compagnie 2 French 720p T) __TOP__.md b/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Monstre Et Compagnie 2 French 720p T) __TOP__.md deleted file mode 100644 index f7520e5d327160280259ef5e3f78842379e53ef6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/HD Online Player (Monstre Et Compagnie 2 French 720p T) __TOP__.md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (Monstre Et Compagnie 2 French 720p T)


    Downloadhttps://geags.com/2uCryy



    -
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/It Is Not Found Any File Specified For Isarcextract Full.md b/spaces/quidiaMuxgu/Expedit-SAM/It Is Not Found Any File Specified For Isarcextract Full.md deleted file mode 100644 index 7702d61d93df610f47c141621162f30704de1968..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/It Is Not Found Any File Specified For Isarcextract Full.md +++ /dev/null @@ -1,61 +0,0 @@ - -

    It Is Not Found Any File Specified For Isarcextract Full: What Does It Mean and How to Fix It?

    - -

    If you are a PC gamer, you may have encountered an error message that says "It is not found any file specified for ISArcExtract" when you try to install some games on your Windows 10 or 11 system. This error is related to the ISDone.dll file, which is a dynamic link library that is used to extract large archive files. When this error occurs, it means that your system is unable to extract the archive files properly and prevents you from installing the game. This can be very frustrating, especially if you have been waiting for a long time to play your favorite game.

    - -

    Fortunately, there are some possible solutions that can help you fix this error and enjoy your gaming experience. In this article, we will explain what causes this error, what are its symptoms, and how to fix it with some simple methods. Read on to find out more.

    -

    It Is Not Found Any File Specified For Isarcextract Full


    Download >>> https://geags.com/2uCr0a



    - -

    What Causes the ISArcExtract Error?

    - -

    The ISArcExtract error can be caused by various factors, such as:

    - -
      -
    • Corrupted or incomplete archive files: If the archive files that contain the game data are corrupted or incomplete, they may not be extracted correctly by the ISDone.dll file. This can happen due to a faulty download, a damaged disk, or a virus infection.
    • -
    • Corrupted or missing ISDone.dll file: If the ISDone.dll file itself is corrupted or missing from your system, it may not be able to perform its function properly. This can happen due to a faulty installation, a malware attack, or a registry error.
    • -
    • Lack of system resources: If your system does not have enough memory, disk space, or processing power to extract the archive files, it may fail to do so and trigger the error. This can happen due to a low-end hardware configuration, a high CPU usage, or a large number of background programs.
    • -
    • Administrative or compatibility issues: If you do not have the required permissions or compatibility settings to run the game's setup file, it may not be able to access the archive files and extract them. This can happen due to an outdated Windows version, a restricted user account, or an incompatible game version.
    • -
    - -

    What Are the Symptoms of the ISArcExtract Error?

    - -

    The main symptom of the ISArcExtract error is that you will see an error message that says "It is not found any file specified for ISArcExtract" when you try to install a game on your Windows 10 or 11 system. The error message may also include some additional information, such as the name of the archive file that could not be extracted, the size of the archive file, or the CRC check value.

    - -

    The error message will prevent you from installing the game and may also cause your system to freeze or crash. You may also notice some other issues, such as slow performance, high CPU usage, or disk errors.
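-

    Because a failed CRC check usually points at damaged archive data, one quick sanity check before reinstalling is to compare the downloaded archive against a checksum published by the download source, if one is available. The short Python sketch below is illustrative only: the file name and the expected CRC32 value are placeholders, not values from any real installer.

```python
# Illustrative sketch: verify a downloaded archive part against a known CRC32.
# The path and expected checksum below are placeholders, not real values.
import zlib

def crc32_of_file(path, chunk_size=1 << 20):
    """Compute the CRC32 of a file in chunks to keep memory use low."""
    crc = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

archive = "data1.bin"    # hypothetical archive part shipped with an installer
expected = 0x1C291CA3    # hypothetical CRC published by the download source

actual = crc32_of_file(archive)
if actual == expected:
    print("Checksum matches; the archive data is probably intact.")
else:
    print(f"Checksum mismatch (got {actual:#010x}); re-download the file before installing.")
```

    If the checksum does not match, no amount of reinstalling will help; download the file again from a clean source first.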

    - -

    How to Fix the ISArcExtract Error?

    - -

    There are several methods that can help you fix the ISArcExtract error and install your game successfully. Here are some of them:

    - -
      -
    1. Run the game's installer with admin rights: One of the simplest ways to fix this error is to run the game's setup file with administrative privileges. This will ensure that there are no permission issues that can interfere with the extraction process. To do this, follow these steps:
    2. -
        -
      • Locate the folder that contains the game's setup.exe file.
      • -
      • Right-click on the setup.exe file and select Run as administrator from the context menu.
      • -
      • Follow the on-screen instructions to install the game.
      • -
      -
    3. Run the game's installer in compatibility mode: Another possible way to fix this error is to run the game's setup file in compatibility mode. This will make sure that there are no compatibility issues that can prevent the extraction process. To do this, follow these steps:
    4. -
        -
      • Locate the folder that contains the game's setup.exe file.
      • -
      • Right-click on the setup.exe file and select Properties from the context menu.
      • -
      • Switch to the Compatibility tab and check the box next to Run this program in compatibility mode for.
      • -
      • Select a compatible Windows version from the drop-down menu. For example, if your game was released before Windows 10, you can try selecting Windows 7 or Windows XP.
      • -
      • Click Apply and OK to save the changes.
      • -
      • Run the setup.exe file as usual and install the game.
      • -
      -
5. Set the game's setup wizard at high priority: If a lack of system resources is causing the extraction to fail, giving the setup process more CPU time can help. Start the game's setup, open Task Manager (Ctrl+Shift+Esc), switch to the Details tab, right-click the setup process, and choose Set priority > High. Then let the installation continue and check whether the error goes away.
6. Run system file and image scans: If the ISDone.dll file or other system files are corrupted, you can repair them with the built-in System File Checker and DISM tools. Open Command Prompt as administrator, run sfc /scannow, and then run DISM /Online /Cleanup-Image /RestoreHealth. Restart your computer and try installing the game again. A minimal script that chains these commands is sketched right after this list.
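For readers who prefer to script method 6, here is a minimal Python sketch that chains the two repair scans. It assumes a Windows machine and a Python session launched as administrator; sfc and DISM are standard Windows tools, but this wrapper and its messages are illustrative only.

```python
# Minimal sketch: chain the Windows repair scans from method 6.
# Assumes Windows and an elevated (administrator) Python session.
import subprocess
import sys

def run_scan(command):
    """Run one repair command; return True if it exited cleanly."""
    print("Running:", " ".join(command))
    result = subprocess.run(command)  # tool output streams to the console
    return result.returncode == 0

scans = [
    ["sfc", "/scannow"],                                      # System File Checker
    ["DISM", "/Online", "/Cleanup-Image", "/RestoreHealth"],  # component-store repair
]

for scan in scans:
    if not run_scan(scan):
        print("Scan reported problems; reboot and re-run it before retrying the installer.")
        sys.exit(1)

print("Scans finished; try the game installer again.")
```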

      -

      - -

Each of these methods targets one of the common causes described above, so work through them in order: start with the quick permission and compatibility fixes, then move on to the priority change and the system scans if the error persists.

      -

      - -

      We hope this article has helped you understand what causes the ISArcExtract error and how to fix it. If you have any questions or suggestions, feel free to leave a comment below.

      -

      Summary

      - -

      The ISArcExtract error is a problem that many PC gamers face when they try to install some games on their Windows 10 or 11 systems. This error is related to the ISDone.dll file, which is used to extract large archive files. When this error occurs, it means that your system is unable to extract the archive files properly and prevents you from installing the game.

      - -

      There are several methods that can help you fix this error and install your game successfully. You can try running the game's installer with admin rights, running the game's installer in compatibility mode, setting the game's setup wizard at high priority, or running system file and image scans. These methods can fix the common causes of this error, such as corrupted or incomplete archive files, corrupted or missing ISDone.dll file, lack of system resources, or administrative or compatibility issues.

      - -

      This article has explained what causes the ISArcExtract error and how to fix it. If you have any questions or suggestions, feel free to leave a comment below.

      -
      -
      \ No newline at end of file diff --git a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index 54c2fd2484c3d52c3dc9bb4c88e5c102fa686fdc..0000000000000000000000000000000000000000 --- a/spaces/r3gm/Aesthetic_RVC_Inference_HF/lib/infer/infer_libs/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,91 +0,0 @@ -import numpy as np -import pyworld - -from lib.infer.infer_libs.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/lib/audio.py b/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/lib/audio.py deleted file mode 100644 index 93c06c17513af60fb38bf2c0a61a9c9fb6c7a96a..0000000000000000000000000000000000000000 --- a/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/lib/audio.py +++ /dev/null @@ -1,21 +0,0 @@ -import ffmpeg -import numpy as np - - -def load_audio(file, sr): - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio 
while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # To prevent beginners from copying paths with leading or trailing spaces, quotation marks, and line breaks. - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run(cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - return np.frombuffer(out, np.float32).flatten() diff --git a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/analyze/features/basic.py b/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/analyze/features/basic.py deleted file mode 100644 index f7d3dbae466c5aa3f33866b82cd51e470b7111b1..0000000000000000000000000000000000000000 --- a/spaces/radames/SPIGA-face-alignment-headpose-estimator/SPIGA/spiga/demo/analyze/features/basic.py +++ /dev/null @@ -1,40 +0,0 @@ - -class ObjectAnalyzed: - - def __init__(self): - # Processor addons - self.attributes = [] - self.drawers = [] - - def has_processor(self): - if len(self.attributes) > 0: - return True - else: - return False - - def plot_features(self, image, plotter, show_attributes): - for drawer in self.drawers: - image = drawer(image, self, plotter, show_attributes) - return image - - def get_attributes(self, names=None): - - # Initialization by input type - single_name = False - if names is None: - names = self.attributes - elif isinstance(names, str): - names = [names] - single_name = True - - attributes = {} - attribute = [] - for name in names: - if name in self.attributes and name in self.__dict__.keys(): - attribute = getattr(self, name) - attributes[name] = attribute - - if single_name: - return attribute - else: - return attributes \ No newline at end of file diff --git a/spaces/radames/sentence-embeddings-visualization/umap_reducer.py b/spaces/radames/sentence-embeddings-visualization/umap_reducer.py deleted file mode 100644 index 68242ae72f189a762aa343ca7ef052b1dd527956..0000000000000000000000000000000000000000 --- a/spaces/radames/sentence-embeddings-visualization/umap_reducer.py +++ /dev/null @@ -1,27 +0,0 @@ -import umap -import hdbscan -import copy - - -class UMAPReducer: - def __init__(self, umap_options={}, cluster_options={}): - - # set options with defaults - self.umap_options = {'n_components': 2, 'spread': 1, 'min_dist': 0.1, 'n_neighbors': 15, - 'metric': 'cosine', "verbose": True, **umap_options} - self.cluster_options = {'allow_single_cluster': True, 'min_cluster_size': 500, 'min_samples': 10, **cluster_options} - - def setParams(self, umap_options={}, cluster_options={}): - # update params - self.umap_options = {**self.umap_options, **umap_options} - self.cluster_options = {**self.cluster_options, **cluster_options} - - def clusterAnalysis(self, data): - print("Cluster params:", self.cluster_options) - clusters = hdbscan.HDBSCAN().fit(data) # **self.cluster_options - return clusters - - def embed(self, data): - print("UMAP params:", self.umap_options) - result = umap.UMAP(**self.umap_options).fit_transform(data) - return result diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Autoclosets crack How to design and sell closets with ease.md b/spaces/raedeXanto/academic-chatgpt-beta/Autoclosets crack How to design and sell closets with ease.md deleted file mode 100644 index 
5d4c744dad8dfdc62493cbdc56f5e5bd6fa14b4b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Autoclosets crack How to design and sell closets with ease.md +++ /dev/null @@ -1,100 +0,0 @@ -

      What is autoclosets and why do you need it?

      -

      If you are looking for a fast and easy way to design and sell custom closets, you might have heard of autoclosets, a software program that helps you create realistic 3D models of your closet projects. Autoclosets is a powerful tool that allows you to:

      -

      autoclosets crack


      DOWNLOADhttps://tinourl.com/2uL3l3



      -
        -
      • Choose from a wide range of elements, such as drawers, shelves, accessories, etc.
      • -
      • Customize the dimensions, colors, materials, and finishes of your closet components
      • -
      • Generate detailed plans and elevations of your closet designs
      • -
      • Create stunning 3D images and videos of your closet projects
      • -
      • Print or export your closet designs in various formats
      • -
      • Manage your orders, invoices, and inventory
      • -
      -

      With autoclosets, you can save time and money, impress your clients, and increase your sales. Autoclosets is a software program that is designed specifically for closet manufacturers and storage space planners. It is compatible with Windows operating systems and requires a license key to activate.

      -

      How to get autoclosets for free with a crack?

      -

      You might be tempted to download a cracked version of autoclosets from the internet, hoping to get all the benefits of the software without paying for it. However, this is not a good idea for several reasons:

      -
        -
      • A cracked version of autoclosets might not work properly or have some features disabled. You might encounter errors, bugs, or crashes that could ruin your work or damage your computer.
      • -
      • A cracked version of autoclosets might contain viruses, malware, or spyware that could infect your computer or steal your personal information. You might expose yourself to identity theft, fraud, or ransomware attacks.
      • -
      • A cracked version of autoclosets might violate the intellectual property rights of the software developer. You might face legal consequences, such as fines or lawsuits, for using pirated software.
      • -
      • A cracked version of autoclosets might be unethical and unfair to the software developer. You might deprive them of their rightful income and discourage them from creating more quality software in the future.
      • -
      -

      Therefore, using a cracked version of autoclosets is not worth the risk or the hassle. You might end up losing more than you gain by trying to get something for nothing.

      -

      -

      What are the alternatives to autoclosets crack?

      -

      If you want to use autoclosets without breaking the law or compromising your security, there are some alternatives that you can consider:

      -
        -
      • You can download a free trial version of autoclosets from the official website. The trial version allows you to use all the features of autoclosets for 30 days without any limitations. You can test the software and see if it meets your needs before buying it.
      • -
• You can use other free or low-cost closet design programs that are available online. Some examples are SketchUp, RoomSketcher, SmartDraw, etc. These programs have different features and functionalities than autoclosets, but they might suit your purposes depending on what you are looking for.
      • -
      -

      How to choose the best closet design software for your needs?

      -

      When comparing different closet design software options, there are some factors that you should consider:

      -
        -
      • The ease of use of the software. You should look for a software program that has a user-friendly interface, intuitive controls, and clear instructions. You should be able to learn how to use the software quickly and easily.
      • -
      • The quality and variety of the elements, materials, and finishes that are available in the software. You should look for a software program that has a large and diverse catalog of closet components that you can customize according to your preferences.
      • -
      • The accuracy and realism of the plans, elevations, 3D images, and videos that are generated by the software. You should look for a software program that produces high-quality graphics that reflect your closet designs faithfully.
      • -
• The compatibility of the software with other programs and devices. You should look for a software program that works well with your operating system and hardware specifications. You should also look for a software program that allows you to export or print your closet designs in various formats.
      • -
      • The price and support of the software. You should look for a software program that fits your budget and offers good value for money. You should also look for a software program that has reliable customer service and technical support in case you encounter any problems or have any questions.
      • -
      -
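One practical way to weigh these factors against each other is a simple weighted scoring matrix. The sketch below is illustrative only: the weights, the 1-5 scores, and the candidate names are made-up examples, not real evaluations of any product.

```python
# Illustrative sketch: weighted scoring matrix for comparing design software.
# All weights and 1-5 scores below are made-up example numbers.

WEIGHTS = {
    "ease_of_use": 0.25,
    "catalog_quality": 0.20,
    "render_realism": 0.20,
    "compatibility": 0.15,
    "price_and_support": 0.20,
}

# Hypothetical scores on a 1-5 scale; fill these in from your own testing.
CANDIDATES = {
    "paid tool (trial)": {"ease_of_use": 4, "catalog_quality": 5, "render_realism": 5,
                          "compatibility": 4, "price_and_support": 3},
    "free web tool":     {"ease_of_use": 3, "catalog_quality": 2, "render_realism": 3,
                          "compatibility": 4, "price_and_support": 5},
}

def weighted_score(scores):
    """Combine per-factor scores into a single number using WEIGHTS."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

for name, scores in sorted(CANDIDATES.items(),
                           key=lambda item: weighted_score(item[1]),
                           reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
```

Whichever option scores highest for your own weights is the one to trial first.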

      Conclusion

      -

In conclusion, autoclosets is a great software program for closet design and sales, but it is not free. Using a cracked version of autoclosets is risky, illegal, and unethical. Instead, you can try the free trial version of autoclosets or use other free or low-cost closet design software alternatives. To choose the best closet design software for your needs, you should consider factors such as ease of use, quality and variety, accuracy and realism, compatibility, and price and support.

      -

      We hope this article has been helpful and informative for you. If you have any questions or comments about autoclosets or closet design software in general, please feel free to leave them below. Thank you for reading!

      -

      Frequently Asked Questions

      -
        -
      1. What is autoclosets?

        Autoclosets is a software program that helps you design and sell custom closets by creating realistic 3D models of your closet projects.
      2. -
      3. How much does autoclosets cost?

        The price of autoclosets depends on the version and license type that you choose. You can check the pricing details on the official website.
      4. -
      5. How can I get autoclosets for free?

        You can download a free trial version of autoclosets from the official website that allows you to use all the features of autoclosets for 30 days without any limitations.
      6. -
      7. Is it safe to use a cracked version of autoclosets?

        No, it is not safe to use a cracked version of autoclosets because it might not work properly or have some features disabled; it might contain viruses, malware or spyware that could infect your computer or steal your personal information; it might violate the intellectual property rights of the software developer; and it might be unethical and unfair to the software developer.
      8. -
      9. What are some other closet design software options?

        You can use other free or low-cost closet design software options that are available online, such as SketchUp, RoomSketcher, SmartDraw, etc. These software programs have different features and functionalities than autoclosets, but they might suit your purposes depending on what you are looking for.
      10. -
      -

      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Draft Day Sports Pro Basketball 2019 Download] [Torrent] ((HOT)).md b/spaces/raedeXanto/academic-chatgpt-beta/Draft Day Sports Pro Basketball 2019 Download] [Torrent] ((HOT)).md deleted file mode 100644 index d5f54e8b813ab403ad50a360caf89c15560f8012..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Draft Day Sports Pro Basketball 2019 Download] [Torrent] ((HOT)).md +++ /dev/null @@ -1,19 +0,0 @@ - -

      Draft Day Sports: Pro Basketball 2019: A Text Sim for Basketball Purists

      -

      If you are a fan of basketball and text sims, you might want to check out Draft Day Sports: Pro Basketball 2019, a game that puts you in the role of general manager of your favorite professional basketball team. You can make every decision from drafting, trading, signing free agents, and making substitutions to lead your team to championship glory and become a dynasty.

      -

      Draft Day Sports: Pro Basketball 2019 is developed by Wolverine Studios, a company that specializes in sports simulation games. The game features an immersive and realistic gameplay that simulates the NBA season, playoffs, draft, free agency, and more. You can also customize your league with historical or fictional teams and players, or import rosters from other sources.

      -

      Draft Day Sports: Pro Basketball 2019 Download] [Torrent]


      Download File https://tinourl.com/2uL2la



      -

      The game has received positive reviews from users and critics alike. According to Operation Sports, "Draft Day Sports: Pro Basketball 2019 is something that was made for me. So Is it worth the price tag for people who are looking for a text sim? Absolutely." The game has also been praised for its depth, detail, and user interface.

      -

      If you are interested in trying out Draft Day Sports: Pro Basketball 2019, you can buy it on Steam for $19.99 or download it from torrent sites. However, we do not condone piracy and recommend that you support the developers by purchasing the game legally. You can also visit the official website or the Steam community page for more information and updates.

      - -

      But how do you play Draft Day Sports: Pro Basketball 2019? What are some tips and tricks to help you succeed in this game? Here are some suggestions that might help you become a better general manager and coach.

      -
        -
      • Know your team's strengths and weaknesses. Every team has a different roster, budget, and fan base. You need to evaluate your players' skills, contracts, and personalities, and decide who to keep, trade, or release. You also need to balance your spending and revenue, and keep your fans happy and loyal.
      • -
• Plan ahead for the draft and free agency. The draft and free agency are two of the most important events in the game, as they can change the fate of your franchise. You need to scout the available prospects and free agents, and rank them according to your needs and preferences. You also need to be aware of the salary cap and the luxury tax, and avoid overspending or underpaying; a toy example of this cap arithmetic is sketched after this list.
      • -
      • Use the advanced options and tools. The game offers a lot of options and tools to help you customize your experience and optimize your performance. You can adjust the game settings, such as the difficulty level, the simulation speed, and the league rules. You can also use the editor to create or edit teams, players, logos, jerseys, courts, and more. You can also import or export rosters, schedules, drafts, and other data.
      • -
      • Watch the games or simulate them. The game allows you to watch the games in real time or simulate them instantly. You can choose to control your team's substitutions, play calling, and strategy during the games, or let the AI handle them for you. You can also view the box scores, play-by-play logs, stats, standings, awards, and records after each game.
      • -
      • Learn from your mistakes and successes. The game is not easy, and you will face many challenges and obstacles along the way. You will make mistakes and lose games, but you will also have successes and win games. You need to learn from your experiences and improve your skills as a general manager and coach. You also need to have fun and enjoy the game.
      • -
      -
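To make the cap planning from the tips above concrete, here is a small illustrative sketch of the kind of arithmetic a general manager does before signing a free agent. The cap figure, tax line, and contract amounts are invented example numbers, not values from the game or any real league.

```python
# Illustrative sketch: checking cap space before signing a free agent.
# The cap, tax line, and contract figures are invented example numbers.

SALARY_CAP = 109_000_000        # hypothetical soft cap
LUXURY_TAX_LINE = 132_000_000   # hypothetical tax threshold

roster = {
    "star forward": 38_000_000,
    "starting guard": 24_500_000,
    "veteran center": 15_000_000,
    "bench (combined)": 28_000_000,
}

def payroll(contracts):
    """Total committed salary for the roster."""
    return sum(contracts.values())

def signing_verdict(contracts, offer):
    """Classify what adding one more contract would do to the payroll."""
    total = payroll(contracts) + offer
    if total <= SALARY_CAP:
        return "fits under the cap"
    if total <= LUXURY_TAX_LINE:
        return "over the cap but under the tax line"
    return "would trigger the luxury tax"

offer = 9_000_000  # hypothetical free-agent asking price
print(f"Current payroll: ${payroll(roster):,}")
print(f"Signing at ${offer:,}: {signing_verdict(roster, offer)}")
```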

      Draft Day Sports: Pro Basketball 2019 is a game that will appeal to basketball fans who love text sims. It is a game that will test your knowledge, strategy, and creativity as you try to build your own dynasty. It is a game that will give you hours of entertainment and satisfaction.

      -
      -
      \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Rdsharmaclass9mathsbookpdfdownload-PATCHED.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Rdsharmaclass9mathsbookpdfdownload-PATCHED.md deleted file mode 100644 index fa56ab356d88e02215d23f4ab629f2edee38f71e..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/Rdsharmaclass9mathsbookpdfdownload-PATCHED.md +++ /dev/null @@ -1,104 +0,0 @@ -## Rdsharmaclass9mathsbookpdfdownload - - - - - - ![Rdsharmaclass9mathsbookpdfdownload PATCHED](https://cdn.shopify.com/s/files/1/0565/3558/0868/products/image_914c04c2-2e18-42f0-88b3-d95a9fd2a2d3.jpg?v\u003d1676961133) - - - - - -**Rdsharmaclass9mathsbookpdfdownload ✅ [https://lodystiri.blogspot.com/?file=2txtm3](https://lodystiri.blogspot.com/?file=2txtm3)** - - - - - - - - - - - - - -# How to Download RD Sharma Class 9 Maths Book PDF for Free - - - -If you are looking for a comprehensive and easy-to-understand maths book for class 9, then RD Sharma is one of the best options. RD Sharma is a renowned author and teacher who has written many books for CBSE and ICSE students. His books cover the latest syllabus and exam pattern and provide ample practice questions and examples. - - - -However, buying a physical copy of RD Sharma class 9 maths book can be expensive and inconvenient. That's why many students prefer to download the PDF version of the book online. But how can you find a reliable and legal source to download RD Sharma class 9 maths book PDF for free? - - - -In this article, we will share some tips and tricks to help you download RD Sharma class 9 maths book PDF for free without any hassle. Follow these steps and enjoy learning maths with RD Sharma. - - - -## Step 1: Visit the official website of RD Sharma - - - -The first and most obvious step is to visit the official website of RD Sharma at [www.rdsharma.com](https://www.rdsharma.com/). Here you will find all the information about his books, including the class 9 maths book. You can also browse through the contents and sample pages of the book to get an idea of what it covers. - - - -On the website, you will also find a link to buy the book online from various e-commerce platforms like Amazon, Flipkart, etc. However, if you want to download the PDF version of the book for free, you will have to look elsewhere. - - - -## Step 2: Search for RD Sharma class 9 maths book PDF on Google - - - -The next step is to search for RD Sharma class 9 maths book PDF on Google or any other search engine of your choice. You will get many results that claim to offer the PDF version of the book for free. However, not all of them are trustworthy or legal. Some of them may contain viruses, malware, or spam that can harm your device or compromise your privacy. Some of them may also ask you to register or pay a fee before downloading the PDF. - - - -Therefore, you need to be careful and selective while choosing a website to download RD Sharma class 9 maths book PDF for free. Here are some tips to help you identify a reliable and legal source: - - - -- Check the domain name and extension of the website. Avoid websites that have suspicious or unfamiliar domain names or extensions like .tk, .cc, .xyz, etc. - -- Check the reviews and ratings of the website on Google or other platforms. Avoid websites that have poor or negative reviews or ratings from users. - -- Check the quality and authenticity of the PDF file. 
Avoid websites that offer low-quality or incomplete PDF files that do not match the original book. - -- Check the security and privacy policy of the website. Avoid websites that do not have a secure connection (https) or a clear privacy policy that protects your personal information. - - - -## Step 3: Download RD Sharma class 9 maths book PDF from a reliable and legal source - - - -Once you have found a reliable and legal source to download RD Sharma class 9 maths book PDF for free, you can proceed to download it on your device. Here are some steps to follow: - - - -1. Click on the download link or button on the website. - -2. Select the destination folder or location where you want to save the PDF file on your device. - -3. Wait for the download to complete. It may take a few minutes depending on your internet speed and file size. - -4. Open the PDF file using a PDF reader application like Adobe Acrobat Reader or Google Chrome. - -5. Enjoy reading and learning maths with RD Sharma class 9 maths book. - - - -We hope this article helped you download RD Sharma class 9 maths book PDF for free without any hassle. If you have any questions or suggestions, feel free to leave a comment below. Happy learning! - - 1b8d091108 - - - - - diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Field S Virology Pdf Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Field S Virology Pdf Download.md deleted file mode 100644 index 4e11db38e78ac29b991d05f980dfd9cb5582046a..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Field S Virology Pdf Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Field S Virology Pdf Download


      Download Zip ––– https://urlgoal.com/2uCKJ5



-
-Fields Virology, 6th Ed [PDF][Tahir99] VRG: a free e-book, available for download as a PDF or plain-text file, or readable online. Listed details: Author: Thomson MN; Publisher: Wiley-Blackwell; Year: 2008; Language: English; Pages: 480; Format: PDF; ISBN: 978-1-119-34970-6; Quality: eBook (computer generated).
      -
      -
      -

      diff --git a/spaces/rinong/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp b/spaces/rinong/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp deleted file mode 100644 index d2e633dc896433c205e18bc3e455539192ff968e..0000000000000000000000000000000000000000 --- a/spaces/rinong/StyleGAN-NADA/e4e/models/stylegan2/op/upfirdn2d.cpp +++ /dev/null @@ -1,23 +0,0 @@ -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} \ No newline at end of file diff --git a/spaces/rlancemartin/auto-evaluator/README.md b/spaces/rlancemartin/auto-evaluator/README.md deleted file mode 100644 index 7e9fd922038629e050182b1d984c4fecba254526..0000000000000000000000000000000000000000 --- a/spaces/rlancemartin/auto-evaluator/README.md +++ /dev/null @@ -1,73 +0,0 @@ ---- -title: Auto Evaluator -emoji: :brain -colorFrom: blue -colorTo: yellow -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: mit ---- - -# `Auto-evaluator` :brain: :memo: - -This is a lightweight evaluation tool for question-answering using `Langchain` to: - -- Ask the user to input a set of documents of interest - -- Apply an LLM (`GPT-3.5-turbo`) to auto-generate `question`-`answer` pairs from these docs - -- Generate a question-answering chain with a specified set of UI-chosen configurations - -- Use the chain to generate a response to each `question` - -- Use an LLM (`GPT-3.5-turbo`) to score the response relative to the `answer` - -- Explore scoring across various chain configurations - -**Run as Streamlit app** - -`pip install -r requirements.txt` - -`streamlit run auto-evaluator.py` - -**Inputs** - -`num_eval_questions` - Number of questions to auto-generate (if the user does not supply an eval set) - -`split_method` - Method for text splitting - -`chunk_chars` - Chunk size for text splitting - -`overlap` - Chunk overlap for text splitting - -`embeddings` - Embedding method for chunks - -`retriever_type` - Chunk retrieval method - -`num_neighbors` - Neighbors for retrieval - -`model` - LLM for summarization of retrieved chunks - -`grade_prompt` - Prompt choice for model self-grading - -**Blog** - -https://blog.langchain.dev/auto-eval-of-question-answering-tasks/ - -**UI** - -![image](https://user-images.githubusercontent.com/122662504/233218347-de10cf41-6230-47a7-aa9e-8ab01673b87a.png) - -**Hosted app** - -See: -https://github.com/langchain-ai/auto-evaluator - -And: -https://autoevaluator.langchain.com/ - -**Disclaimer** - -```You will need an OpenAI API key with access to `GPT-4` and an Anthropic API key to take advantage of all of the default dashboard model settings. 
      However, additional models (e.g., from Hugging Face) can be easily added to the app.``` \ No newline at end of file diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/uniform_assigner.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/uniform_assigner.py deleted file mode 100644 index 70294fc45f32b2611c6c1521de14f57e4ec446f0..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/core/bbox/assigners/uniform_assigner.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from ..transforms import bbox_xyxy_to_cxcywh -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class UniformAssigner(BaseAssigner): - """Uniform Matching between the anchors and gt boxes, which can achieve - balance among positive anchors; gt_bboxes_ignore is not considered for - now. - - Args: - pos_ignore_thr (float): the threshold to ignore positive anchors - neg_ignore_thr (float): the threshold to ignore negative anchors - match_times (int): Number of positive anchors for each gt box. - Default 4. - iou_calculator (dict): iou_calculator config - """ - - def __init__(self, - pos_ignore_thr, - neg_ignore_thr, - match_times=4, - iou_calculator=dict(type='BboxOverlaps2D')): - self.match_times = match_times - self.pos_ignore_thr = pos_ignore_thr - self.neg_ignore_thr = neg_ignore_thr - self.iou_calculator = build_iou_calculator(iou_calculator) - - def assign(self, - bbox_pred, - anchor, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - num_gts, num_bboxes = gt_bboxes.size(0), bbox_pred.size(0) - - # 1. assign 0 (background) by default; -1 marks anchors ignored later - assigned_gt_inds = bbox_pred.new_full((num_bboxes, ), - 0, - dtype=torch.long) - assigned_labels = bbox_pred.new_full((num_bboxes, ), - -1, - dtype=torch.long) - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - if num_gts == 0: - # No ground truth, assign all to background - assigned_gt_inds[:] = 0 - assign_result = AssignResult( - num_gts, assigned_gt_inds, None, labels=assigned_labels) - assign_result.set_extra_property( - 'pos_idx', bbox_pred.new_empty(0, dtype=torch.bool)) - assign_result.set_extra_property('pos_predicted_boxes', - bbox_pred.new_empty((0, 4))) - assign_result.set_extra_property('target_boxes', - bbox_pred.new_empty((0, 4))) - return assign_result - - # 2. Compute the L1 cost between boxes - # Note that we use both the anchors and the predicted boxes - cost_bbox = torch.cdist( - bbox_xyxy_to_cxcywh(bbox_pred), - bbox_xyxy_to_cxcywh(gt_bboxes), - p=1) - cost_bbox_anchors = torch.cdist( - bbox_xyxy_to_cxcywh(anchor), bbox_xyxy_to_cxcywh(gt_bboxes), p=1) - - # The topk function returns different results in CPU and CUDA mode; - # CPU mode is used here to ensure consistency with the source code. - # TODO: Check whether the performance of cpu and cuda are the same. 
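- # topk below picks, for each gt box, the `match_times` predicted boxes and
- # the `match_times` anchors with the smallest L1 cost; both cost matrices
- # are moved to CPU first so the selection is reproducible across devices.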
- C = cost_bbox.cpu() - C1 = cost_bbox_anchors.cpu() - - # self.match_times x n - index = torch.topk( - C, # c=b,n,x c[i]=n,x - k=self.match_times, - dim=0, - largest=False)[1] - - # self.match_times x n - index1 = torch.topk(C1, k=self.match_times, dim=0, largest=False)[1] - # (self.match_times*2) x n - indexes = torch.cat((index, index1), - dim=1).reshape(-1).to(bbox_pred.device) - - pred_overlaps = self.iou_calculator(bbox_pred, gt_bboxes) - anchor_overlaps = self.iou_calculator(anchor, gt_bboxes) - pred_max_overlaps, _ = pred_overlaps.max(dim=1) - anchor_max_overlaps, _ = anchor_overlaps.max(dim=0) - - # 3. Compute the ignore indexes use gt_bboxes and predict boxes - ignore_idx = pred_max_overlaps > self.neg_ignore_thr - assigned_gt_inds[ignore_idx] = -1 - - # 4. Compute the ignore indexes of positive sample use anchors - # and predict boxes - pos_gt_index = torch.arange( - 0, C1.size(1), - device=bbox_pred.device).repeat(self.match_times * 2) - pos_ious = anchor_overlaps[indexes, pos_gt_index] - pos_ignore_idx = pos_ious < self.pos_ignore_thr - - pos_gt_index_with_ignore = pos_gt_index + 1 - pos_gt_index_with_ignore[pos_ignore_idx] = -1 - assigned_gt_inds[indexes] = pos_gt_index_with_ignore - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - - assign_result = AssignResult( - num_gts, - assigned_gt_inds, - anchor_max_overlaps, - labels=assigned_labels) - assign_result.set_extra_property('pos_idx', ~pos_ignore_idx) - assign_result.set_extra_property('pos_predicted_boxes', - bbox_pred[indexes]) - assign_result.set_extra_property('target_boxes', - gt_bboxes[pos_gt_index]) - return assign_result diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/lad.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/lad.py deleted file mode 100644 index c6cc1e0b2d9fd91dabc606da5192522e908ccebf..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/lad.py +++ /dev/null @@ -1,92 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
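- # LAD (Label Assignment Distillation): a frozen, pretrained teacher detector
- # computes the label assignment for each training batch, and the student
- # detector below is trained against the teacher's assignment instead of its own.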
-import torch -import torch.nn as nn -from mmcv.runner import load_checkpoint - -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .kd_one_stage import KnowledgeDistillationSingleStageDetector - - -@DETECTORS.register_module() -class LAD(KnowledgeDistillationSingleStageDetector): - """Implementation of `LAD `_.""" - - def __init__(self, - backbone, - neck, - bbox_head, - teacher_backbone, - teacher_neck, - teacher_bbox_head, - teacher_ckpt, - eval_teacher=True, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(KnowledgeDistillationSingleStageDetector, - self).__init__(backbone, neck, bbox_head, train_cfg, test_cfg, - pretrained) - self.eval_teacher = eval_teacher - self.teacher_model = nn.Module() - self.teacher_model.backbone = build_backbone(teacher_backbone) - if teacher_neck is not None: - self.teacher_model.neck = build_neck(teacher_neck) - teacher_bbox_head.update(train_cfg=train_cfg) - teacher_bbox_head.update(test_cfg=test_cfg) - self.teacher_model.bbox_head = build_head(teacher_bbox_head) - if teacher_ckpt is not None: - load_checkpoint( - self.teacher_model, teacher_ckpt, map_location='cpu') - - @property - def with_teacher_neck(self): - """bool: whether the detector has a teacher_neck""" - return hasattr(self.teacher_model, 'neck') and \ - self.teacher_model.neck is not None - - def extract_teacher_feat(self, img): - """Directly extract teacher features from the backbone+neck.""" - x = self.teacher_model.backbone(img) - if self.with_teacher_neck: - x = self.teacher_model.neck(x) - return x - - def forward_train(self, - img, - img_metas, - gt_bboxes, - gt_labels, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): Class indices corresponding to each box - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. 
- """ - # get label assignment from the teacher - with torch.no_grad(): - x_teacher = self.extract_teacher_feat(img) - outs_teacher = self.teacher_model.bbox_head(x_teacher) - label_assignment_results = \ - self.teacher_model.bbox_head.get_label_assignment( - *outs_teacher, gt_bboxes, gt_labels, img_metas, - gt_bboxes_ignore) - - # the student use the label assignment from the teacher to learn - x = self.extract_feat(img) - losses = self.bbox_head.forward_train(x, label_assignment_results, - img_metas, gt_bboxes, gt_labels, - gt_bboxes_ignore) - return losses diff --git a/spaces/ronvolutional/ai-pokemon-card/static/js/index.js b/spaces/ronvolutional/ai-pokemon-card/static/js/index.js deleted file mode 100644 index 16aff046ea6402e381cf7483fcbba08c41921fbe..0000000000000000000000000000000000000000 --- a/spaces/ronvolutional/ai-pokemon-card/static/js/index.js +++ /dev/null @@ -1,117 +0,0 @@ -import { cardHTML } from './card-html.js'; -import { updateCardName, initialiseCardRotation, setOutput, screenshotCard } from './dom-manipulation.js'; - -const nameInput = document.querySelector('input[name="name"'); -const nameToggle = document.querySelector('button.toggle-name'); - -let pokeName; -let trainerName; -let useTrainerName = true; -let generating = false; -let mousemoveHandlerForPreviousCard; -let pulls = 0; -let saved = 0; - -const generate = async () => { - if (generating) { - return; - } - - const scene = document.querySelector('.scene'); - const cardSlot = scene.querySelector('.card-slot'); - const actions = document.querySelector('.actions'); - - scene.removeEventListener('mousemove', mousemoveHandlerForPreviousCard, true); - cardSlot.innerHTML = ''; - generating = true; - document.querySelector('.scene .booster').removeAttribute('title'); - setOutput('booster', 'generating'); - - try { - actions.style.opacity = '1'; - actions.setAttribute('aria-hidden', 'false'); - actions.querySelectorAll('button').forEach((button) => button.setAttribute('tabindex', '0')); - - if (window.innerWidth <= 920) { - scene.scrollIntoView({ behavior: 'smooth', block: 'end' }); - } - - await new Promise((resolve) => setTimeout(resolve, 2_000)); - - pulls += 1; - - const cardResponse = await fetch(`new_card?pull=${pulls}&saved=${saved}`); - const card = await cardResponse.json(); - - pokeName = card.details.name; - - generating = false; - - setOutput('booster', 'completed'); - - await new Promise((resolve) => - setTimeout(resolve, window.matchMedia('(prefers-reduced-motion: reduce)').matches ? 
1_500 : 1_000) - ); - - cardSlot.innerHTML = cardHTML(card.details); - document.querySelector('img.picture').src = card.image; - - mousemoveHandlerForPreviousCard = initialiseCardRotation(scene); - - setOutput('card', 'completed'); - - const updateNameDuringAnimation = setInterval(() => updateCardName(trainerName, pokeName, useTrainerName), 100); - - setTimeout(() => { - clearInterval(updateNameDuringAnimation); - }, 500); - } catch (err) { - generating = false; - setOutput('booster', 'failed'); - console.error(err); - } -}; - -nameInput.addEventListener('input', (e) => { - trainerName = [...e.target.value].filter((char) => char.match(/[\wÀ-ÿ '".,@&+#!?:/\\()_-]/g)?.length).join(''); - - nameInput.value = trainerName; - - updateCardName(trainerName, pokeName, useTrainerName); -}); - -document.querySelector('form.name-form').addEventListener('submit', (e) => { - e.preventDefault(); - - if (document.querySelector('.output').dataset.state === 'completed') { - if (!window.confirm('Generate new Pokémon?')) { - return; - } - } - - generate(); -}); - -nameToggle.addEventListener('click', () => { - useTrainerName = !useTrainerName; - - updateCardName(trainerName, pokeName, useTrainerName); - - if (!useTrainerName) { - nameToggle.classList.add('off'); - } else { - nameToggle.classList.remove('off'); - } -}); - -document.querySelector('.booster').addEventListener('click', generate); - -document.querySelector('button.generate-new').addEventListener('click', generate); - -document.querySelector('button.save').addEventListener('click', async () => { - const a = document.createElement('a'); - a.href = await screenshotCard(); - a.download = `${updateCardName(trainerName, pokeName, useTrainerName)} - This Pokémon Does Not Exist.png`; - a.click(); - saved += 1; -}); diff --git a/spaces/rorallitri/biomedical-language-models/logs/Kisi Kisi Soal Fiqih Ma Kelas X Semester 1 [2021].md b/spaces/rorallitri/biomedical-language-models/logs/Kisi Kisi Soal Fiqih Ma Kelas X Semester 1 [2021].md deleted file mode 100644 index 08351887673d84bb874a194401c8e6e0472e7ed9..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Kisi Kisi Soal Fiqih Ma Kelas X Semester 1 [2021].md +++ /dev/null @@ -1,6 +0,0 @@ -

      kisi kisi soal fiqih ma kelas x semester 1 (Indonesian: exam question blueprint for fiqh, Madrasah Aliyah grade X, semester 1)


      Download »»» https://tinurll.com/2uzmeX



      -
      - 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/runa91/bite_gradio/src/combined_model/helper3.py b/spaces/runa91/bite_gradio/src/combined_model/helper3.py deleted file mode 100644 index 7230caa0e0da72afd8b2271327bec453fe9f8f7b..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/combined_model/helper3.py +++ /dev/null @@ -1,17 +0,0 @@ - -import numpy as np - -def get_triangle_faces_from_pyvista_poly(poly): - """Fetch all triangle faces.""" - stream = poly.faces - tris = [] - i = 0 - while i < len(stream): - n = stream[i] - if n != 3: - i += n + 1 - continue - stop = i + n + 1 - tris.append(stream[i+1:stop]) - i = stop - return np.array(tris) \ No newline at end of file diff --git a/spaces/samcaicn/bingai/src/lib/bots/bing/types.ts b/spaces/samcaicn/bingai/src/lib/bots/bing/types.ts deleted file mode 100644 index c2ca255b5919982c21fbe2081e2b5a3c593cb74b..0000000000000000000000000000000000000000 --- a/spaces/samcaicn/bingai/src/lib/bots/bing/types.ts +++ /dev/null @@ -1,212 +0,0 @@ -export type Author = 'user' | 'system' | 'bot' - -export type BotId = 'bing' - -export enum BingConversationStyle { - Creative = 'Creative', - Balanced = 'Balanced', - Precise = 'Precise' -} - -export enum ErrorCode { - CONVERSATION_LIMIT = 'CONVERSATION_LIMIT', - BING_UNAUTHORIZED = 'BING_UNAUTHORIZED', - BING_FORBIDDEN = 'BING_FORBIDDEN', - BING_CAPTCHA = 'BING_CAPTCHA', - NOTFOUND_ERROR = 'NOT_FOUND_ERROR', - UNKOWN_ERROR = 'UNKOWN_ERROR', - NETWORK_ERROR = 'NETWORK_ERROR', -} - -export class ChatError extends Error { - code: ErrorCode - constructor(message: string, code: ErrorCode) { - super(message) - this.code = code - } -} - -export type ChatMessageModel = { - id: string - author: Author - text: string - error?: ChatError - throttling?: Throttling - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] -} - -export interface ConversationModel { - messages: ChatMessageModel[] -} - -export type Event = - | { - type: 'UPDATE_ANSWER' - data: { - text: string - sourceAttributions?: SourceAttribution[] - suggestedResponses?: SuggestedResponse[] - throttling?: Throttling - } - } - | { - type: 'DONE' - } - | { - type: 'ERROR' - error: ChatError - } - -export interface SendMessageParams { - prompt: string - options: T - onEvent: (event: Event) => void - signal?: AbortSignal -} - -export interface ConversationResponse { - conversationId: string - clientId: string - conversationSignature: string - result: { - value: string - message?: string - } -} - -export interface Telemetry { - metrics?: null - startTime: string -} - -export interface ChatUpdateArgument { - messages?: ChatResponseMessage[] - throttling?: Throttling - requestId: string - result: null -} - -export type ChatUpdateCompleteResponse = { - type: 2 - invocationId: string - item: ChatResponseItem -} | { - type: 1 - target: string - arguments: ChatUpdateArgument[] -} | { - type: 3 - invocationId: string -} - -export interface ChatRequestResult { - value: string - serviceVersion: string - error?: string -} - -export interface ChatResponseItem { - messages: ChatResponseMessage[] - firstNewMessageIndex: number - suggestedResponses: null - conversationId: string - requestId: string - conversationExpiryTime: string - telemetry: Telemetry - result: ChatRequestResult - throttling: Throttling -} -export enum InvocationEventType { - Invocation = 1, - StreamItem = 2, - Completion = 3, - StreamInvocation = 4, - CancelInvocation = 5, - Ping = 6, - Close = 7, -} - -// https://github.com/bytemate/bingchat-api/blob/main/src/lib.ts - 
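-// A consumer of the Event union above typically switches on event.type:
-// appending text, source attributions and suggested responses on 'UPDATE_ANSWER',
-// resolving the request on 'DONE', and surfacing the ChatError on 'ERROR'.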
-export interface ConversationInfo { - conversationId: string - clientId: string - conversationSignature: string - invocationId: number - conversationStyle: BingConversationStyle - prompt: string -} - -export interface BingChatResponse { - conversationSignature: string - conversationId: string - clientId: string - invocationId: number - conversationExpiryTime: Date - response: string - details: ChatResponseMessage -} - -export interface Throttling { - maxNumLongDocSummaryUserMessagesInConversation: number - maxNumUserMessagesInConversation: number - numLongDocSummaryUserMessagesInConversation: number - numUserMessagesInConversation: number -} - -export interface ChatResponseMessage { - text: string - author: string - createdAt: Date - timestamp: Date - messageId: string - requestId: string - offense: string - adaptiveCards: AdaptiveCard[] - sourceAttributions: SourceAttribution[] - feedback: Feedback - contentOrigin: string - messageType?: string - contentType?: string - privacy: null - suggestedResponses: SuggestedResponse[] -} - -export interface AdaptiveCard { - type: string - version: string - body: Body[] -} - -export interface Body { - type: string - text: string - wrap: boolean - size?: string -} - -export interface Feedback { - tag: null - updatedOn: null - type: string -} - -export interface SourceAttribution { - providerDisplayName: string - seeMoreUrl: string - searchQuery: string -} - -export interface SuggestedResponse { - text: string - author?: Author - createdAt?: Date - timestamp?: Date - messageId?: string - messageType?: string - offense?: string - feedback?: Feedback - contentOrigin?: string - privacy?: null -} diff --git a/spaces/samehmamin/argillatest/README.md b/spaces/samehmamin/argillatest/README.md deleted file mode 100644 index e1f0b0bbf84a12842f95243dcc1016c18ab0fa48..0000000000000000000000000000000000000000 --- a/spaces/samehmamin/argillatest/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Argilla Space Template -emoji: 🏷️ -colorFrom: purple -colorTo: red -sdk: docker -app_port: 6900 -fullWidth: true -tags: -- argilla -duplicated_from: argilla/argilla-template-space ---- - -This is the Argilla Space Template you can use to deploy and run your own instance of Argilla on the Hugging Face Hub, for labeling, fun, and active learning loops! 
- -Login with: - -user: argilla -password: 1234 \ No newline at end of file diff --git a/spaces/santoshsindham/VideoSummary/README.md b/spaces/santoshsindham/VideoSummary/README.md deleted file mode 100644 index 04d457c6e1d24830a488c8d0ba04ccb128e4b994..0000000000000000000000000000000000000000 --- a/spaces/santoshsindham/VideoSummary/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: VideoSummary -emoji: 👁 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.1.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sasha/BiasDetection/toxicity_prompt.py b/spaces/sasha/BiasDetection/toxicity_prompt.py deleted file mode 100644 index 35a74454ed49fa9abd4f259b369e1f87418da4c8..0000000000000000000000000000000000000000 --- a/spaces/sasha/BiasDetection/toxicity_prompt.py +++ /dev/null @@ -1,59 +0,0 @@ -# -*- coding: utf-8 -*- - -import torch -import pandas as pd -import math -from pathlib import Path -import json -import pickle -from datasets import load_dataset -from tqdm.auto import tqdm -from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline, set_seed -from transformers import AutoTokenizer, AutoModelForCausalLM - -device = "cuda:0" if torch.cuda.is_available() else "cpu" - -hatetokenizer = AutoTokenizer.from_pretrained("facebook/roberta-hate-speech-dynabench-r4-target") -hatemodel = AutoModelForSequenceClassification.from_pretrained("facebook/roberta-hate-speech-dynabench-r4-target") - - -modelname = "bigscience/bloom-1b3" -tokenizer = AutoTokenizer.from_pretrained(modelname, padding_side="left") -model = AutoModelForCausalLM.from_pretrained(modelname, torch_dtype="auto").to(device) -tokenizer.pad_token = tokenizer.eos_token - -classifier = pipeline('text-classification', model=hatemodel, tokenizer=hatetokenizer, top_k=2, truncation=True) - -prompts = load_dataset("sasha/real_toxicity_prompts", split="prompts") -df_prompt = pd.DataFrame(prompts) -continuations = load_dataset("sasha/real_toxicity_continuations", split="continuations") -df_continuations = pd.DataFrame(continuations) -all_toxic_df = pd.concat([df_prompt, df_continuations], axis=1) - -def gen_toxic_prompt(data, batchsize=32, max_gen_len=50): - # generate continuations batch by batch and collect the decoded outputs - outputs = [] - for i in tqdm(range(0, len(data), batchsize)): - batch = data[i: i+batchsize] - inputs = tokenizer(batch, return_tensors="pt", padding=True) - - output_sequences = model.generate( - input_ids=inputs['input_ids'].to(model.device), - attention_mask=inputs['attention_mask'].to(model.device), - do_sample=False, # disable sampling to test if batching affects output - pad_token_id=tokenizer.eos_token_id, - max_length=len(inputs['input_ids'][0]) + max_gen_len, # let it generate longer - ) - outputs.extend(tokenizer.decode(x) for x in output_sequences) - return outputs - -def remove_prompt(prompt, generation): - return str(generation).replace(str(prompt), '') - -def hate_classification(gen): - score = classifier(gen) - score = sorted(score, key=lambda x: x['label']) - hatescore = score[0]['score'] - no_hatescore = score[1]['score'] - return hatescore, no_hatescore - - -# generate continuations for every prompt, strip the prompt text, then score them -all_toxic_df['gen_text'] = [remove_prompt(p, g) for p, g in zip(all_toxic_df['prompt'], gen_toxic_prompt(list(all_toxic_df['prompt'])))] - -all_toxic_df[['hatescore','nohatescore']] = all_toxic_df.apply(lambda row: hate_classification(row['gen_text']), axis=1, result_type="expand") diff --git a/spaces/scedlatioru/img-to-music/example/Journey 2 The Mysterious Island 2012 1080pDual AudioEnglish51 
HINDI2chPHDR ((BETTER)).md b/spaces/scedlatioru/img-to-music/example/Journey 2 The Mysterious Island 2012 1080pDual AudioEnglish51 HINDI2chPHDR ((BETTER)).md deleted file mode 100644 index 6580284c1866f4fe7904523479d277323d5a2d7e..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Journey 2 The Mysterious Island 2012 1080pDual AudioEnglish51 HINDI2chPHDR ((BETTER)).md +++ /dev/null @@ -1,41 +0,0 @@ -
      -

      Journey 2: The Mysterious Island (2012) - A Review

      -

      Journey 2: The Mysterious Island is a sequel to the 2008 film Journey to the Center of the Earth, based on the novels by Jules Verne. It stars Josh Hutcherson, Dwayne Johnson, Vanessa Hudgens, Michael Caine, and Luis Guzmán. The film follows Sean Anderson (Hutcherson), who receives a coded message from his grandfather Alexander (Caine), who claims to have found the mysterious island from Verne's books. Sean teams up with his stepfather Hank (Johnson), a helicopter pilot Gabato (Guzmán), and his daughter Kailani (Hudgens) to embark on a thrilling adventure to find Alexander and the secrets of the island.

      -

      If you are looking for a fun and family-friendly movie with stunning visuals, action-packed scenes, and humorous moments, then Journey 2: The Mysterious Island is a good choice. The film delivers on its promise of taking the viewers to a fantastical world full of wonders and dangers, such as giant bees, lizards, birds, and volcanoes. The CGI effects are impressive and realistic, creating a vivid and immersive experience. The film also has a good balance of comedy and drama, with Johnson and Guzmán providing most of the laughs with their witty lines and antics. The chemistry between the actors is also believable and charming, especially between Hutcherson and Hudgens, who play the romantic interests.

      -

      Journey 2 The Mysterious Island 2012 1080pDual AudioEnglish51 HINDI2chPHDR


      Download Zip https://gohhs.com/2uEzp0



      -

      However, if you are looking for a deep and meaningful story with complex characters and themes, then Journey 2: The Mysterious Island might disappoint you. The film is not very original or innovative in its plot or message, as it follows the typical formula of a quest adventure with some twists and turns along the way. The characters are also not very well-developed or memorable, as they mostly serve as stereotypes or archetypes. The film does not explore much of the potential of the island or its inhabitants, nor does it address any of the moral or ethical issues that might arise from such a discovery. The film is more focused on entertaining the audience with spectacle and spectacle alone.

      -

      Conclusion

      -

      Journey 2: The Mysterious Island is a movie that can be enjoyed by anyone who likes fantasy, adventure, and comedy genres. It is a visually stunning and entertaining film that will keep you on the edge of your seat and make you laugh along the way. However, it is not a movie that will challenge you intellectually or emotionally, as it lacks originality and depth in its story and characters. It is a movie that you can watch once and forget about it afterwards.

      -

      If you want to watch Journey 2: The Mysterious Island in high quality and dual audio (English and Hindi), then you can download it from this link: Journey 2 The Mysterious Island 2012 1080pDual AudioEnglish51 HINDI2chPHDR. This link will provide you with a torrent file that you can use to download the movie using a torrent client. Make sure you have a VPN service to protect your privacy and security when downloading torrents.

      -

      What is Journey 2: The Mysterious Island about?

      -

      Journey 2: The Mysterious Island is a movie that follows the adventures of Sean Anderson, a young Vernian who is obsessed with finding the mysterious island from Jules Verne's novels. He receives a coded message from his grandfather Alexander, who claims to have discovered the island and its secrets. Sean convinces his stepfather Hank to join him on a trip to Palau, where they hire a helicopter pilot Gabato and his daughter Kailani to fly them to the island's coordinates. However, they encounter a storm that forces them to crash-land on the island, where they meet Alexander and learn about its wonders and dangers. They also discover that the island is sinking due to volcanic activity, and they must find a way to escape before it's too late.

      - -

      Why should you watch Journey 2: The Mysterious Island?

      -

      Journey 2: The Mysterious Island is a movie that will appeal to anyone who loves the fantasy, adventure, and comedy genres. It takes you to a magical world full of amazing creatures and landscapes, such as giant bees, lizards, birds, and volcanoes; it keeps you entertained with fast-paced action, stunning visuals, and humorous moments; it makes you feel like a kid again as you explore the mysteries of the island with the characters; and it may inspire you to follow your dreams and passions, as Sean does with his love for Verne's books.

      - -

      How can you watch Journey 2: The Mysterious Island in high quality and dual audio?

      -

      If you want to watch Journey 2: The Mysterious Island in high quality and dual audio (English and Hindi), then you can download it from this link: Journey 2 The Mysterious Island 2012 1080pDual AudioEnglish51 HINDI2chPHDR. This link will provide you with a torrent file that you can use to download the movie using a torrent client. Make sure you have a VPN service to protect your privacy and security when downloading torrents.

      -

      Who are the cast and crew of Journey 2: The Mysterious Island?

      -

      Journey 2: The Mysterious Island is a movie that features a talented and diverse cast and crew. The movie is directed by Brad Peyton, who also directed San Andreas and Rampage, starring Dwayne Johnson. The movie is produced by Beau Flynn, Tripp Vinson, and Charlotte Huggins, who also produced Journey to the Center of the Earth. The movie is written by Brian Gunn and Mark Gunn, who also wrote Bring It On Again and Brightburn. The movie is based on the novels by Jules Verne, who is considered one of the fathers of science fiction.

      -

      -

      The movie stars Josh Hutcherson as Sean Anderson, a young Vernian who is determined to find the mysterious island. Hutcherson is best known for his roles in The Hunger Games series, Bridge to Terabithia, and Zathura. The movie also stars Dwayne Johnson as Hank Parsons, Sean's stepfather and a former Navy officer. Johnson is one of the most popular and highest-paid actors in Hollywood, with roles in Fast & Furious, Jumanji, Moana, and more. The movie also features Vanessa Hudgens as Kailani Laguatan, Gabato's daughter and Sean's love interest. Hudgens rose to fame with her role in High School Musical, and has also starred in Spring Breakers, Grease Live!, and The Princess Switch. The movie also has Michael Caine as Alexander Anderson, Sean's grandfather and a legendary Vernian. Caine is one of the most respected and acclaimed actors in the industry, with roles in The Dark Knight trilogy, Inception, The Prestige, and more. The movie also has Luis Guzmán as Gabato Laguatan, a helicopter pilot and a comic relief. Guzmán is a veteran actor who has appeared in Boogie Nights, Traffic, Narcos, and more.

      - -

      What are some of the reviews of Journey 2: The Mysterious Island?

      -

      Journey 2: The Mysterious Island is a movie that has received mixed reviews from critics and audiences alike. The movie has a rating of 45% on Rotten Tomatoes, based on 131 reviews, with an average score of 5/10. The site's consensus reads: "Aggressively unambitious, Journey 2 might thrill teen viewers, but most others will find it too intense for young audiences and too cartoonishly dull for adults." The movie also has a score of 59 out of 100 on Metacritic, based on 27 reviews, indicating "mixed or average reviews".

      -

      However, the movie has also received positive feedback from some reviewers and viewers, who praised its visual effects, action sequences, humor, and cast performances, and appreciated its homage to Verne's novels and its family-friendly appeal. Some of the positive reviews are:

      -
      • "Journey 2: The Mysterious Island is silly but fun family entertainment." - Richard Roeper
      • "Journey 2: The Mysterious Island is a cheerful and good-looking family film." - Roger Ebert
      • "Journey 2: The Mysterious Island is a fun-filled adventure that will keep you entertained from start to finish." - Common Sense Media
      -

      What are some of the themes and messages of Journey 2: The Mysterious Island?

      -

      Journey 2: The Mysterious Island is a movie that explores some of the themes and messages that are common in Jules Verne's novels, such as adventure, exploration, discovery, and imagination. The movie shows how Sean is inspired by Verne's stories and follows his passion for finding the mysterious island. The movie also shows how Hank and Alexander are able to bond with Sean over their shared love for adventure and curiosity. The movie also celebrates the power of imagination and creativity, as the characters encounter a world that is full of wonders and surprises, such as giant bees, lizards, birds, and volcanoes. The movie also encourages the viewers to follow their dreams and passions, as Sean does with his love for Verne's books.

      - -

      What are some of the challenges and difficulties of making Journey 2: The Mysterious Island?

      -

      Journey 2: The Mysterious Island is a movie that faced some challenges and difficulties in its production and development. One of the challenges was to find a suitable director for the sequel, as Eric Brevig, who directed the first film, was unavailable due to scheduling conflicts. The producers eventually hired Brad Peyton, who impressed them with his vision and enthusiasm for the project. Another challenge was to find a suitable location for filming, as the island had to look exotic and realistic. The producers chose Hawaii as the main location, as it offered a variety of landscapes and environments that matched the island's description. However, filming in Hawaii also posed some difficulties, such as weather conditions, permits, logistics, and safety issues. Another challenge was to create the visual effects for the movie, as the island had to feature many fantastical creatures and elements that required CGI. The producers hired several visual effects companies to work on different aspects of the movie, such as Digital Domain, Rhythm & Hues Studios, MPC, Method Studios, Prime Focus World, Scanline VFX, Rising Sun Pictures, Iloura, Image Engine Design Inc., Pixomondo, Lola VFX, Hydraulx, Tippett Studio, Proof, Halon Entertainment, Gentle Giant Studios, Weta Digital, Third Floor Inc., Stereo D, Legend3D, Gener8, Venture 3D, Reliance MediaWorks, Pixel Playground Inc., Crazy Horse Effects, Cantina Creative, Identity FX Inc., Ollin Studio, Atomic Fiction, Whiskytree Inc., yU+co. However, creating the visual effects also involved some challenges, such as time constraints, budget limitations, technical issues, and artistic decisions.

      -

      Final verdict

      -

      Journey 2: The Mysterious Island can be enjoyed by anyone who likes the fantasy, adventure, and comedy genres. It is a visually stunning and entertaining film that takes you to a magical world full of wonders and dangers, keeps you engaged with fast-paced action, striking visuals, and humorous moments, makes you feel like a kid again as you explore the island's mysteries with the characters, and may even inspire you to follow your dreams and passions, as Sean does with his love for Verne's books.

      -

      However, it is not a movie that will challenge you intellectually or emotionally, as it lacks originality and depth in its story and characters: it follows the typical quest-adventure formula, leaves most of the island's potential unexplored, sidesteps the moral and ethical questions such a discovery would raise, and is easy to watch once and forget afterwards.


      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Waves All Plugins Bundle V9r8 REPACK Full REPACK R2R Deepstatus133.md b/spaces/scedlatioru/img-to-music/example/Waves All Plugins Bundle V9r8 REPACK Full REPACK R2R Deepstatus133.md deleted file mode 100644 index 7da9f57b135b2481292fd235cda23d05306ebd86..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Waves All Plugins Bundle V9r8 REPACK Full REPACK R2R Deepstatus133.md +++ /dev/null @@ -1,21 +0,0 @@ -
      -

      Waves All Plugins Bundle v9r8 FULL REPACK - R2R [deepstatus][133]

      -

      Waves All Plugins Bundle v9r8 FULL REPACK - R2R [deepstatus][133] is a torrent file that contains a collection of audio plugins from Waves, a leading developer of professional audio software. Waves plugins are used by musicians, producers, engineers, and sound designers to enhance the sound quality and creativity of their audio projects. The bundle includes 64-bit support, faster scanning, loading, and processing, and a new easy-to-use activation system called Waves License Center[^1^].

      -

      The torrent file was uploaded by deepstatus, a verified uploader on The Pirate Bay, a popular website for sharing digital content. The file has a size of 1.09 GB and contains 133 files. The file was created by R2R, a group of hackers who crack software and bypass copy protection mechanisms[^3^]. The file has been downloaded by thousands of users who want to access Waves plugins for free or test them before buying them.

      -

      Waves All Plugins Bundle V9r8 FULL REPACK R2R Deepstatus133


      Download Zip https://gohhs.com/2uEA2f



      -

      However, downloading and using this torrent file may pose some risks and disadvantages. First of all, it is illegal to use software that you do not own or have a license for. You may face legal consequences if you are caught using pirated software. Second, the torrent file may contain viruses, malware, or other harmful programs that can damage your computer or steal your personal information. Third, the torrent file may not work properly or be compatible with your system or other software. You may experience crashes, errors, or glitches when using the plugins. Fourth, you may miss out on updates, support, and new features that Waves offers to its legitimate customers. You may also lose access to the plugins if Waves detects that you are using a cracked version.

      -

      Therefore, it is recommended that you do not download or use this torrent file. Instead, you should visit the official website of Waves[^2^] and check out their audio plugin bundles. They offer a variety of bundles for different needs and budgets, such as music production, mixing, mastering, live sound, post production, and more. You can also try out their plugins for free for 7 days with no commitment. You can benefit from their high-quality products, customer service, technical support, and lifetime updates.

      -


      -

      Waves plugins are designed to help you achieve professional results in your audio projects. Whether you are recording, mixing, mastering, or performing live, Waves plugins can enhance your sound quality, creativity, and workflow. Waves plugins are compatible with most popular DAWs (digital audio workstations) and operating systems. You can use them as standalone applications or as plugins within your DAW. You can also customize your plugin settings and save them as presets for future use.

      -

      Waves offers a wide range of audio plugins for different purposes and genres. Some of their most popular plugins include:

      -
      • Waves Tune Real-Time: A pitch correction plugin that automatically tunes vocals in real time with minimal latency and natural sound.
      • Vocal Rider: A plugin that automatically adjusts the level of vocal tracks in relation to the rest of the mix.
      • Vocal Bender: A plugin that lets you manipulate the pitch and formant of vocal tracks in real time, creating effects such as gender switching, robot voice, and more.
      • CLA Vocals: A plugin that emulates the signature vocal sound of legendary producer and mixer Chris Lord-Alge, featuring compression, EQ, reverb, delay, and more.
      • CLA-2A Compressor / Limiter: A plugin that models the classic LA-2A optical compressor/limiter, known for its smooth and warm sound.
      -

      These are just some examples of the many plugins that Waves offers. You can explore their website to find more plugins that suit your needs and preferences. You can also watch tutorials, read reviews, and listen to demos to learn more about how to use Waves plugins effectively.

      -

      d5da3c52bf
      -
      -
      \ No newline at end of file diff --git a/spaces/sczhou/ProPainter/web-demos/hugging_face/tools/painter.py b/spaces/sczhou/ProPainter/web-demos/hugging_face/tools/painter.py deleted file mode 100644 index 0e711d35aa8348d15cdad9d1cd413da41ea4f1ab..0000000000000000000000000000000000000000 --- a/spaces/sczhou/ProPainter/web-demos/hugging_face/tools/painter.py +++ /dev/null @@ -1,215 +0,0 @@ -# paint masks, contours, or points on images, with specified colors -import cv2 -import torch -import numpy as np -from PIL import Image -import copy -import time - - -def colormap(rgb=True): - color_list = np.array( - [ - 0.000, 0.000, 0.000, - 1.000, 1.000, 1.000, - 1.000, 0.498, 0.313, - 0.392, 0.581, 0.929, - 0.000, 0.447, 0.741, - 0.850, 0.325, 0.098, - 0.929, 0.694, 0.125, - 0.494, 0.184, 0.556, - 0.466, 0.674, 0.188, - 0.301, 0.745, 0.933, - 0.635, 0.078, 0.184, - 0.300, 0.300, 0.300, - 0.600, 0.600, 0.600, - 1.000, 0.000, 0.000, - 1.000, 0.500, 0.000, - 0.749, 0.749, 0.000, - 0.000, 1.000, 0.000, - 0.000, 0.000, 1.000, - 0.667, 0.000, 1.000, - 0.333, 0.333, 0.000, - 0.333, 0.667, 0.000, - 0.333, 1.000, 0.000, - 0.667, 0.333, 0.000, - 0.667, 0.667, 0.000, - 0.667, 1.000, 0.000, - 1.000, 0.333, 0.000, - 1.000, 0.667, 0.000, - 1.000, 1.000, 0.000, - 0.000, 0.333, 0.500, - 0.000, 0.667, 0.500, - 0.000, 1.000, 0.500, - 0.333, 0.000, 0.500, - 0.333, 0.333, 0.500, - 0.333, 0.667, 0.500, - 0.333, 1.000, 0.500, - 0.667, 0.000, 0.500, - 0.667, 0.333, 0.500, - 0.667, 0.667, 0.500, - 0.667, 1.000, 0.500, - 1.000, 0.000, 0.500, - 1.000, 0.333, 0.500, - 1.000, 0.667, 0.500, - 1.000, 1.000, 0.500, - 0.000, 0.333, 1.000, - 0.000, 0.667, 1.000, - 0.000, 1.000, 1.000, - 0.333, 0.000, 1.000, - 0.333, 0.333, 1.000, - 0.333, 0.667, 1.000, - 0.333, 1.000, 1.000, - 0.667, 0.000, 1.000, - 0.667, 0.333, 1.000, - 0.667, 0.667, 1.000, - 0.667, 1.000, 1.000, - 1.000, 0.000, 1.000, - 1.000, 0.333, 1.000, - 1.000, 0.667, 1.000, - 0.167, 0.000, 0.000, - 0.333, 0.000, 0.000, - 0.500, 0.000, 0.000, - 0.667, 0.000, 0.000, - 0.833, 0.000, 0.000, - 1.000, 0.000, 0.000, - 0.000, 0.167, 0.000, - 0.000, 0.333, 0.000, - 0.000, 0.500, 0.000, - 0.000, 0.667, 0.000, - 0.000, 0.833, 0.000, - 0.000, 1.000, 0.000, - 0.000, 0.000, 0.167, - 0.000, 0.000, 0.333, - 0.000, 0.000, 0.500, - 0.000, 0.000, 0.667, - 0.000, 0.000, 0.833, - 0.000, 0.000, 1.000, - 0.143, 0.143, 0.143, - 0.286, 0.286, 0.286, - 0.429, 0.429, 0.429, - 0.571, 0.571, 0.571, - 0.714, 0.714, 0.714, - 0.857, 0.857, 0.857 - ] - ).astype(np.float32) - color_list = color_list.reshape((-1, 3)) * 255 - if not rgb: - color_list = color_list[:, ::-1] - return color_list - - -color_list = colormap() -color_list = color_list.astype('uint8').tolist() - - -def vis_add_mask(image, mask, color, alpha): - color = np.array(color_list[color]) - mask = mask > 0.5 - image[mask] = image[mask] * (1-alpha) + color * alpha - return image.astype('uint8') - -def point_painter(input_image, input_points, point_color=5, point_alpha=0.9, point_radius=15, contour_color=2, contour_width=5): - h, w = input_image.shape[:2] - point_mask = np.zeros((h, w)).astype('uint8') - for point in input_points: - point_mask[point[1], point[0]] = 1 - - kernel = cv2.getStructuringElement(2, (point_radius, point_radius)) - point_mask = cv2.dilate(point_mask, kernel) - - contour_radius = (contour_width - 1) // 2 - dist_transform_fore = cv2.distanceTransform(point_mask, cv2.DIST_L2, 3) - dist_transform_back = cv2.distanceTransform(1-point_mask, cv2.DIST_L2, 3) - dist_map = dist_transform_fore - dist_transform_back - # 
...:::!!!:::... - contour_radius += 2 - contour_mask = np.abs(np.clip(dist_map, -contour_radius, contour_radius)) - contour_mask = contour_mask / np.max(contour_mask) - contour_mask[contour_mask>0.5] = 1. - - # paint mask - painted_image = vis_add_mask(input_image.copy(), point_mask, point_color, point_alpha) - # paint contour - painted_image = vis_add_mask(painted_image.copy(), 1-contour_mask, contour_color, 1) - return painted_image - -def mask_painter(input_image, input_mask, mask_color=5, mask_alpha=0.7, contour_color=1, contour_width=3): - assert input_image.shape[:2] == input_mask.shape, 'different shape between image and mask' - # 0: background, 1: foreground - mask = np.clip(input_mask, 0, 1) - contour_radius = (contour_width - 1) // 2 - - dist_transform_fore = cv2.distanceTransform(mask, cv2.DIST_L2, 3) - dist_transform_back = cv2.distanceTransform(1-mask, cv2.DIST_L2, 3) - dist_map = dist_transform_fore - dist_transform_back - # ...:::!!!:::... - contour_radius += 2 - contour_mask = np.abs(np.clip(dist_map, -contour_radius, contour_radius)) - contour_mask = contour_mask / np.max(contour_mask) - contour_mask[contour_mask>0.5] = 1. - - # paint mask - painted_image = vis_add_mask(input_image.copy(), mask.copy(), mask_color, mask_alpha) - # paint contour - painted_image = vis_add_mask(painted_image.copy(), 1-contour_mask, contour_color, 1) - - return painted_image - -def background_remover(input_image, input_mask): - """ - input_image: H, W, 3, np.array - input_mask: H, W, np.array - - image_wo_background: PIL.Image - """ - assert input_image.shape[:2] == input_mask.shape, 'different shape between image and mask' - # 0: background, 1: foreground - mask = np.expand_dims(np.clip(input_mask, 0, 1), axis=2)*255 - image_wo_background = np.concatenate([input_image, mask], axis=2) # H, W, 4 - image_wo_background = Image.fromarray(image_wo_background).convert('RGBA') - - return image_wo_background - -if __name__ == '__main__': - input_image = np.array(Image.open('images/painter_input_image.jpg').convert('RGB')) - input_mask = np.array(Image.open('images/painter_input_mask.jpg').convert('P')) - - # example of mask painter - mask_color = 3 - mask_alpha = 0.7 - contour_color = 1 - contour_width = 5 - - # save the untouched original for comparison - painted_image = Image.fromarray(input_image) - painted_image.save('images/original.png') - - painted_image = mask_painter(input_image, input_mask, mask_color, mask_alpha, contour_color, contour_width) - # save the painted result - painted_image = Image.fromarray(painted_image) - painted_image.save('images/original1.png') - - # example of point painter - input_image = np.array(Image.open('images/painter_input_image.jpg').convert('RGB')) - input_points = np.array([[500, 375], [70, 600]]) # x, y - point_color = 5 - point_alpha = 0.9 - point_radius = 15 - contour_color = 2 - contour_width = 5 - painted_image_1 = point_painter(input_image, input_points, point_color, point_alpha, point_radius, contour_color, contour_width) - # save - painted_image = Image.fromarray(painted_image_1) - painted_image.save('images/point_painter_1.png') - - input_image = np.array(Image.open('images/painter_input_image.jpg').convert('RGB')) - painted_image_2 = point_painter(input_image, input_points, point_color=9, point_radius=20, contour_color=29) - # save - painted_image = Image.fromarray(painted_image_2) - painted_image.save('images/point_painter_2.png') - - # example of background remover - input_image = np.array(Image.open('images/original.png').convert('RGB')) - image_wo_background = background_remover(input_image, input_mask) # 
return PIL.Image - image_wo_background.save('images/image_wo_background.png') diff --git a/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/README.md b/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/README.md deleted file mode 100644 index 92169bc7436e1acf11f9525e0f6e1afe755594b6..0000000000000000000000000000000000000000 --- a/spaces/seanpedrickcase/Light-PDF-Web-QA-Chatbot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Light PDF web QA chatbot -emoji: 🌍 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Chat with a pdf file or web page using a light language model through a Gradio interface. Quick responses even just using CPU. diff --git a/spaces/seduerr/text_analytics/test/docs.py b/spaces/seduerr/text_analytics/test/docs.py deleted file mode 100644 index b71cd68992535df797225a94ad8e4edc0f577a30..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/test/docs.py +++ /dev/null @@ -1,21 +0,0 @@ -import os -from text_complexity_analyzer_cm.constants import BASE_DIRECTORY - -def write_documentation(directory): - elements = os.listdir(directory) - # Iterate over the elements found - for element in elements: - module_name = '.'.join(directory.split('/')[6:]) - if os.path.isfile(f'{directory}/{element}'): - if element == '__init__.py': # Write documentation for module - os.system(f'python -m pydoc -w {module_name}') - else: # Write documentation for file - file_name_no_extension = element.replace('.py', '') - os.system(f'python -m pydoc -w {module_name}.{file_name_no_extension}') - elif os.path.isdir(f'{directory}/{element}') and element != '__pycache__': - write_documentation(f'{directory}/{element}') - -if __name__ == "__main__": - os.chdir(f'{BASE_DIRECTORY}/text_complexity_analyzer_cm') - modules_path = f'{BASE_DIRECTORY}/text_complexity_analyzer_cm' - write_documentation(modules_path) \ No newline at end of file diff --git a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/version.py b/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/version.py deleted file mode 100644 index b794fd409a5e3b3b65ad76a43d6a01a318877640..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything-api/GroundingDINO/groundingdino/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = '0.1.0' diff --git "a/spaces/shencc/gpt/crazy_functions/\350\260\267\346\255\214\346\243\200\347\264\242\345\260\217\345\212\251\346\211\213.py" "b/spaces/shencc/gpt/crazy_functions/\350\260\267\346\255\214\346\243\200\347\264\242\345\260\217\345\212\251\346\211\213.py" deleted file mode 100644 index 834f0799e1dca6328454ca7ec8eaa29b6a167199..0000000000000000000000000000000000000000 --- "a/spaces/shencc/gpt/crazy_functions/\350\260\267\346\255\214\346\243\200\347\264\242\345\260\217\345\212\251\346\211\213.py" +++ /dev/null @@ -1,108 +0,0 @@ -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -from toolbox import CatchException, report_execption, write_results_to_file -from toolbox import update_ui - -def get_meta_information(url, chatbot, history): - import requests - import arxiv - import difflib - from bs4 import BeautifulSoup - from toolbox import get_conf - proxies, = get_conf('proxies') - headers = { - 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36', - } - # 发送 GET 请求 - response = requests.get(url, proxies=proxies, headers=headers) - - # 解析网页内容 
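- # each result row on a Google Scholar page sits in a ".gs_ri" block; the code
- # below scrapes the title, authors, citation count and snippet from it, then
- # looks the title up on arxiv to recover the full abstract when it matches.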
- soup = BeautifulSoup(response.text, "html.parser") - - def string_similar(s1, s2): - return difflib.SequenceMatcher(None, s1, s2).quick_ratio() - - profile = [] - # 获取所有文章的标题和作者 - for result in soup.select(".gs_ri"): - title = result.a.text.replace('\n', ' ').replace(' ', ' ') - author = result.select_one(".gs_a").text - try: - citation = result.select_one(".gs_fl > a[href*='cites']").text # 引用次数是链接中的文本,直接取出来 - except: - citation = 'cited by 0' - abstract = result.select_one(".gs_rs").text.strip() # 摘要在 .gs_rs 中的文本,需要清除首尾空格 - search = arxiv.Search( - query = title, - max_results = 1, - sort_by = arxiv.SortCriterion.Relevance, - ) - paper = next(search.results()) - if string_similar(title, paper.title) > 0.90: # same paper - abstract = paper.summary.replace('\n', ' ') - is_paper_in_arxiv = True - else: # different paper - abstract = abstract - is_paper_in_arxiv = False - paper = next(search.results()) - print(title) - print(author) - print(citation) - profile.append({ - 'title':title, - 'author':author, - 'citation':citation, - 'abstract':abstract, - 'is_paper_in_arxiv':is_paper_in_arxiv, - }) - - chatbot[-1] = [chatbot[-1][0], title + f'\n\n是否在arxiv中(不在arxiv中无法获取完整摘要):{is_paper_in_arxiv}\n\n' + abstract] - yield from update_ui(chatbot=chatbot, history=[]) # 刷新界面 - return profile - -@CatchException -def 谷歌检索小助手(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "分析用户提供的谷歌学术(google scholar)搜索页面中,出现的所有文章: binary-husky,插件初始化中..."]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - import arxiv - import math - from bs4 import BeautifulSoup - except: - report_execption(chatbot, history, - a = f"解析项目: {txt}", - b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade beautifulsoup4 arxiv```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - meta_paper_info_list = yield from get_meta_information(txt, chatbot, history) - batchsize = 5 - for batch in range(math.ceil(len(meta_paper_info_list)/batchsize)): - if len(meta_paper_info_list[:batchsize]) > 0: - i_say = "下面是一些学术文献的数据,提取出以下内容:" + \ - "1、英文题目;2、中文题目翻译;3、作者;4、arxiv公开(is_paper_in_arxiv);4、引用数量(cite);5、中文摘要翻译。" + \ - f"以下是信息源:{str(meta_paper_info_list[:batchsize])}" - - inputs_show_user = f"请分析此页面中出现的所有文章:{txt},这是第{batch+1}批" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, inputs_show_user=inputs_show_user, - llm_kwargs=llm_kwargs, chatbot=chatbot, history=[], - sys_prompt="你是一个学术翻译,请从数据中提取信息。你必须使用Markdown表格。你必须逐个文献进行处理。" - ) - - history.extend([ f"第{batch+1}批", gpt_say ]) - meta_paper_info_list = meta_paper_info_list[batchsize:] - - chatbot.append(["状态?", - "已经全部完成,您可以试试让AI写一个Related Works,例如您可以继续输入Write an academic \"Related Works\" section about \"你搜索的研究领域\" for me."]) - msg = '正常' - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)); - yield from update_ui(chatbot=chatbot, history=history, msg=msg) # 刷新界面 diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/layers_33966KB.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/layers_33966KB.py deleted file mode 100644 index 78e539250075d7fed2f349d05e3317dfe2c96804..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/layers_33966KB.py +++ /dev/null @@ -1,126 
+0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/shimizukawa/python-no-senpai/README.md b/spaces/shimizukawa/python-no-senpai/README.md deleted file mode 100644 index 0f582130d0afcc12b7072fdb0b6300fa8276110e..0000000000000000000000000000000000000000 --- 
a/spaces/shimizukawa/python-no-senpai/README.md +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: Document Search -emoji: 🐠 -colorFrom: green -colorTo: purple -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -# Required Environment variables - -- `INDEX_NAMES`: comma separated index names -- `QDRANT_URL`: Qdrant API endpoint -- `QDRANT_API_KEY`: Qdrant API Key -- `OPENAI_API_KEY`: OpenAI API Key - -# import GitHub issues - -## export from github -first, generate token on: https://github.com/settings/tokens - -``` -$ git clone https://github.com/kazamori/github-api-tools -$ pip install -e ./github-api-tools -$ export GITHUB_API_TOKEN="********" -$ gh-cli-issues --repository -$ ls -issues.json -``` - -## import from json - -``` -$ python store.py -l github_issue ../-issues.json -``` - -# import Wiki Pages - -## export from somewhere - -create `pages.json` like: -```json -{"id": , "title": , "content": , "ctime": ..., "user": , "url": "https:..."} -{"title": ...} -``` - -## import from json - -``` -$ python store.py -l wikipage ../pages.json -``` diff --git a/spaces/shivi/calm_seafoam/theme_dropdown.py b/spaces/shivi/calm_seafoam/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/shivi/calm_seafoam/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if (!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/data/prefetch_dataloader.py b/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/data/prefetch_dataloader.py deleted file mode 100644 index 5088425050d4cc98114a9b93eb50ea60273f35a0..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/basicsr/data/prefetch_dataloader.py +++ /dev/null @@ -1,125 +0,0 @@ -import queue as Queue -import threading -import torch -from torch.utils.data import DataLoader - - -class PrefetchGenerator(threading.Thread): - """A general prefetch generator. 
-
-    Ref:
-    https://stackoverflow.com/questions/7323664/python-generator-pre-fetch
-
-    Args:
-        generator: Python generator.
-        num_prefetch_queue (int): Number of prefetch queue.
-    """
-
-    def __init__(self, generator, num_prefetch_queue):
-        threading.Thread.__init__(self)
-        self.queue = Queue.Queue(num_prefetch_queue)
-        self.generator = generator
-        self.daemon = True
-        self.start()
-
-    def run(self):
-        for item in self.generator:
-            self.queue.put(item)
-        self.queue.put(None)
-
-    def __next__(self):
-        next_item = self.queue.get()
-        if next_item is None:
-            raise StopIteration
-        return next_item
-
-    def __iter__(self):
-        return self
-
-
-class PrefetchDataLoader(DataLoader):
-    """Prefetch version of dataloader.
-
-    Ref:
-    https://github.com/IgorSusmelj/pytorch-styleguide/issues/5#
-
-    TODO:
-    Need to test on single gpu and ddp (multi-gpu). There is a known issue in
-    ddp.
-
-    Args:
-        num_prefetch_queue (int): Number of prefetch queue.
-        kwargs (dict): Other arguments for dataloader.
-    """
-
-    def __init__(self, num_prefetch_queue, **kwargs):
-        self.num_prefetch_queue = num_prefetch_queue
-        super(PrefetchDataLoader, self).__init__(**kwargs)
-
-    def __iter__(self):
-        return PrefetchGenerator(super().__iter__(), self.num_prefetch_queue)
-
-
-class CPUPrefetcher():
-    """CPU prefetcher.
-
-    Args:
-        loader: Dataloader.
-    """
-
-    def __init__(self, loader):
-        self.ori_loader = loader
-        self.loader = iter(loader)
-
-    def next(self):
-        try:
-            return next(self.loader)
-        except StopIteration:
-            return None
-
-    def reset(self):
-        self.loader = iter(self.ori_loader)
-
-
-class CUDAPrefetcher():
-    """CUDA prefetcher.
-
-    Ref:
-    https://github.com/NVIDIA/apex/issues/304#
-
-    It may consume more GPU memory.
-
-    Args:
-        loader: Dataloader.
-        opt (dict): Options.
-    """
-
-    def __init__(self, loader, opt):
-        self.ori_loader = loader
-        self.loader = iter(loader)
-        self.opt = opt
-        self.stream = torch.cuda.Stream()
-        self.device = torch.device('cuda' if opt['num_gpu'] != 0 else 'cpu')
-        self.preload()
-
-    def preload(self):
-        try:
-            self.batch = next(self.loader)  # self.batch is a dict
-        except StopIteration:
-            self.batch = None
-            return None
-        # put tensors to gpu
-        with torch.cuda.stream(self.stream):
-            for k, v in self.batch.items():
-                if torch.is_tensor(v):
-                    self.batch[k] = self.batch[k].to(device=self.device, non_blocking=True)
-
-    def next(self):
-        torch.cuda.current_stream().wait_stream(self.stream)
-        batch = self.batch
-        self.preload()
-        return batch
-
-    def reset(self):
-        self.loader = iter(self.ori_loader)
-        self.preload()
diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 6 45 Lottery Landing On You Sub Indo !!LINK!!.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 6 45 Lottery Landing On You Sub Indo !!LINK!!.md
deleted file mode 100644
index d6f7dedd4b66c6324821a8e5098a99447397f5ca..0000000000000000000000000000000000000000
--- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download 6 45 Lottery Landing On You Sub Indo !!LINK!!.md
+++ /dev/null
@@ -1,42 +0,0 @@
-
      -

      How to Download 6/45 Lottery Landing on You Sub Indo

      -

      If you are a fan of Korean dramas, you might have heard of 6/45 Lottery Landing on You, a spin-off of the hit series Crash Landing on You. In this article, we will tell you what this show is about, why it is popular, and how you can download it with subtitles in Indonesian.

      -

      download 6 45 lottery landing on you sub indo


      Download File ››››› https://ssurll.com/2uNSMH



      -

      What is 6/45 Lottery Landing on You?

      -

      6/45 Lottery Landing on You is a Korean variety show that features the cast of Crash Landing on You, a romantic comedy drama about a South Korean heiress who accidentally lands in North Korea and falls in love with a soldier. The variety show follows the actors as they participate in a lottery game based on the Nanum Lotto 6/45, Korea's main national lottery game. The game involves picking six numbers from 1 to 45, and the winners get to enjoy various prizes and experiences related to Crash Landing on You.
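For readers curious about the mechanics, the draw described above is easy to simulate. The short Python sketch below is an illustration for this article only (it is not code from the show or from the official Nanum Lotto system); it simply picks six distinct numbers from 1 to 45:

```python
import random

def draw_645():
    """Simulate one 6/45 draw: six distinct numbers from 1 to 45, sorted."""
    return sorted(random.sample(range(1, 46), 6))

print(draw_645())  # e.g. [3, 11, 19, 24, 37, 45]
```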

      -

      Why is it popular and where can you watch it?

      -

      6/45 Lottery Landing on You is popular because it gives fans a chance to see their favorite actors from Crash Landing on You in a different and fun setting. The show also reveals behind-the-scenes stories, trivia, and secrets from the drama. Some of the prizes and experiences that the winners get to enjoy include visiting the filming locations, meeting the supporting actors, eating North Korean dishes, and wearing North Korean uniforms.

      -

      You can watch 6/45 Lottery Landing on You on tvN, a Korean cable channel that also aired Crash Landing on You. The show airs every Saturday at 20:40 local time, starting from June 18, 2023. However, if you are not in Korea or do not have access to tvN, you might want to download the show with subtitles in Indonesian. Here are two ways you can do that.

      -

      How to Download 6/45 Lottery Landing on You Sub Indo

      -

      Option 1: Use a streaming service that offers subtitles in Indonesian

      -

      One of the easiest ways to download 6/45 Lottery Landing on You with sub indo is to use a streaming service that has the show in its library and offers subtitles in Indonesian. Some of the streaming services that have this option are:

      -
        -
      • Bilibili: Bilibili is a Chinese video-sharing platform that has a large collection of Asian dramas, movies, animations, and variety shows. It also has user-generated subtitles in various languages, including Indonesian. To download 6/45 Lottery Landing on You with sub indo from Bilibili, you need to register for an account, search for the show's title, select the episode you want, click on the download icon at the bottom right corner of the video player, and choose the subtitle language.
      • -
• DramaSubIndo: DramaSubIndo is a website that provides free downloads of Korean dramas and movies with subtitles in Indonesian. It also has other Asian dramas and movies from China, Japan, Thailand, and Taiwan. To download 6/45 Lottery Landing on You with sub indo from DramaSubIndo, you need to visit the website, search for the show's title, select the episode you want, click on the download link, and choose the subtitle file.
      • -
      • Viu: Viu is a Hong Kong-based streaming service that offers legal and licensed content from Korea, Japan, China, Thailand, and other Asian countries. It also has subtitles in various languages, including Indonesian. To download 6/45 Lottery Landing on You with sub indo from Viu, you need to subscribe to a premium plan, download the Viu app on your device, search for the show's title, select the episode you want, tap on the download icon at the top right corner of the screen, and choose the subtitle language.
      • -
      -

      These are some of the streaming services that you can use to download 6/45 Lottery Landing on You with sub indo. However, you should be aware that some of these services may not be available in your region, or may require a VPN to access. You should also check the quality and accuracy of the subtitles before downloading them.

      -

      -

      Option 2: Use a torrent site that has 6/45 Lottery Landing on You with sub indo

      -

      Another way to download 6/45 Lottery Landing on You with sub indo is to use a torrent site that has the show with subtitles in Indonesian. A torrent site is a website that allows users to share and download files using a peer-to-peer network. Some of the torrent sites that have 6/45 Lottery Landing on You with sub indo are:

      -
        -
      • Nyaa: Nyaa is a popular torrent site that specializes in anime, manga, games, and Asian media. It has a large community of fansubbers who provide subtitles in various languages, including Indonesian. To download 6/45 Lottery Landing on You with sub indo from Nyaa, you need to visit the website, search for the show's title, select the torrent file that has sub indo in its name, download it using a torrent client such as BitTorrent or uTorrent, and open it with a video player that supports subtitles.
      • -
      • Dramaindo: Dramaindo is a torrent site that focuses on Korean dramas and movies with subtitles in Indonesian. It also has other Asian dramas and movies from China, Japan, Thailand, and Taiwan. To download 6/45 Lottery Landing on You with sub indo from Dramaindo, you need to visit the website, search for the show's title, select the torrent file that has sub indo in its name, download it using a torrent client such as BitTorrent or uTorrent, and open it with a video player that supports subtitles.
      • -
      • Kissasian: Kissasian is a torrent site that offers a wide range of Asian dramas and movies with subtitles in various languages, including Indonesian. It also has anime, cartoons, and shows from other countries. To download 6/45 Lottery Landing on You with sub indo from Kissasian, you need to visit the website, search for the show's title, select the episode you want, click on the download button at the bottom of the video player, and choose the subtitle language.
      • -
      -

      These are some of the torrent sites that you can use to download 6/45 Lottery Landing on You with sub indo. However, you should be careful when using these sites as they may contain malware or viruses that can harm your device. You should also use a VPN to protect your privacy and avoid legal issues when downloading from these sites.

      -

      Conclusion

      -

      In this article, we have shown you how to download 6/45 Lottery Landing on You with sub indo using two different methods: streaming services and torrent sites. We have also listed some of the streaming services and torrent sites that have 6/45 Lottery Landing on You with sub indo in their library. However, you should always check the availability , the quality, and the legality of the subtitles before downloading them. We hope that this article has helped you to enjoy 6/45 Lottery Landing on You with sub indo. Happy watching!

      -

      FAQs

      -

      What is 6/45 Lottery?

      -

      6/45 Lottery is a game based on the Nanum Lotto 6/45, Korea's main national lottery game. The game involves picking six numbers from 1 to 45, and the winners get to enjoy various prizes and experiences related to Crash Landing on You.

      -

      What is Crash Landing on You?

      -

      Crash Landing on You is a romantic comedy drama that aired on tvN from December 2019 to February 2020. It tells the story of a South Korean heiress who accidentally lands in North Korea and falls in love with a soldier. It stars Hyun Bin, Son Ye-jin, Kim Jung-hyun, and Seo Ji-hye.

      -

      How are 6/45 Lottery and Crash Landing on You related?

      -

      6/45 Lottery is a spin-off of Crash Landing on You that features the cast of the drama as they participate in a lottery game. The show also reveals behind-the-scenes stories, trivia, and secrets from the drama.

      -

      How many episodes are there in 6/45 Lottery Landing on You?

      -

      There are 12 episodes in 6/45 Lottery Landing on You, each lasting about an hour. The show airs every Saturday at 20:40 local time, starting from June 18, 2023.

      -

      Is 6/45 Lottery Landing on You based on a true story?

      -

      No, 6/45 Lottery Landing on You is not based on a true story. It is a fictional variety show that uses the characters and settings from Crash Landing on You as a premise for a lottery game.

      -
      -
      \ No newline at end of file diff --git a/spaces/simulate-tests/unity-test/index.html b/spaces/simulate-tests/unity-test/index.html deleted file mode 100644 index 918e851d9dd1baf9e4fb4f067fd979d432472161..0000000000000000000000000000000000000000 --- a/spaces/simulate-tests/unity-test/index.html +++ /dev/null @@ -1,24 +0,0 @@ - - - - - - My static Space - - - -
      -

      Welcome to your static Space!

      -

      - You can modify this app directly by editing index.html in the - Files and versions tab. -

      -

      - Also don't forget to check the - Spaces documentation. -

      -
      - - diff --git a/spaces/smallyu/dalle-mini/html2canvas.js b/spaces/smallyu/dalle-mini/html2canvas.js deleted file mode 100644 index 96e2dc5707b1a584ff7b3b583aea7c6c18d4ea76..0000000000000000000000000000000000000000 --- a/spaces/smallyu/dalle-mini/html2canvas.js +++ /dev/null @@ -1,7756 +0,0 @@ -/*! - * html2canvas 1.4.1 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ -(function (global, factory) { - typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() : - typeof define === 'function' && define.amd ? define(factory) : - (global = typeof globalThis !== 'undefined' ? globalThis : global || self, global.html2canvas = factory()); -}(this, (function () { 'use strict'; - - /*! ***************************************************************************** - Copyright (c) Microsoft Corporation. - - Permission to use, copy, modify, and/or distribute this software for any - purpose with or without fee is hereby granted. - - THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH - REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY - AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, - INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM - LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR - OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR - PERFORMANCE OF THIS SOFTWARE. - ***************************************************************************** */ - /* global Reflect, Promise */ - - var extendStatics = function(d, b) { - extendStatics = Object.setPrototypeOf || - ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) || - function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; }; - return extendStatics(d, b); - }; - - function __extends(d, b) { - if (typeof b !== "function" && b !== null) - throw new TypeError("Class extends value " + String(b) + " is not a constructor or null"); - extendStatics(d, b); - function __() { this.constructor = d; } - d.prototype = b === null ? Object.create(b) : (__.prototype = b.prototype, new __()); - } - - var __assign = function() { - __assign = Object.assign || function __assign(t) { - for (var s, i = 1, n = arguments.length; i < n; i++) { - s = arguments[i]; - for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p]; - } - return t; - }; - return __assign.apply(this, arguments); - }; - - function __awaiter(thisArg, _arguments, P, generator) { - function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); } - return new (P || (P = Promise))(function (resolve, reject) { - function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } } - function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } } - function step(result) { result.done ? 
resolve(result.value) : adopt(result.value).then(fulfilled, rejected); } - step((generator = generator.apply(thisArg, _arguments || [])).next()); - }); - } - - function __generator(thisArg, body) { - var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g; - return g = { next: verb(0), "throw": verb(1), "return": verb(2) }, typeof Symbol === "function" && (g[Symbol.iterator] = function() { return this; }), g; - function verb(n) { return function (v) { return step([n, v]); }; } - function step(op) { - if (f) throw new TypeError("Generator is already executing."); - while (_) try { - if (f = 1, y && (t = op[0] & 2 ? y["return"] : op[0] ? y["throw"] || ((t = y["return"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t; - if (y = 0, t) op = [op[0] & 2, t.value]; - switch (op[0]) { - case 0: case 1: t = op; break; - case 4: _.label++; return { value: op[1], done: false }; - case 5: _.label++; y = op[1]; op = [0]; continue; - case 7: op = _.ops.pop(); _.trys.pop(); continue; - default: - if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; } - if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; } - if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; } - if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; } - if (t[2]) _.ops.pop(); - _.trys.pop(); continue; - } - op = body.call(thisArg, _); - } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; } - if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true }; - } - } - - function __spreadArray(to, from, pack) { - if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) { - if (ar || !(i in from)) { - if (!ar) ar = Array.prototype.slice.call(from, 0, i); - ar[i] = from[i]; - } - } - return to.concat(ar || from); - } - - var Bounds = /** @class */ (function () { - function Bounds(left, top, width, height) { - this.left = left; - this.top = top; - this.width = width; - this.height = height; - } - Bounds.prototype.add = function (x, y, w, h) { - return new Bounds(this.left + x, this.top + y, this.width + w, this.height + h); - }; - Bounds.fromClientRect = function (context, clientRect) { - return new Bounds(clientRect.left + context.windowBounds.left, clientRect.top + context.windowBounds.top, clientRect.width, clientRect.height); - }; - Bounds.fromDOMRectList = function (context, domRectList) { - var domRect = Array.from(domRectList).find(function (rect) { return rect.width !== 0; }); - return domRect - ? 
new Bounds(domRect.left + context.windowBounds.left, domRect.top + context.windowBounds.top, domRect.width, domRect.height) - : Bounds.EMPTY; - }; - Bounds.EMPTY = new Bounds(0, 0, 0, 0); - return Bounds; - }()); - var parseBounds = function (context, node) { - return Bounds.fromClientRect(context, node.getBoundingClientRect()); - }; - var parseDocumentSize = function (document) { - var body = document.body; - var documentElement = document.documentElement; - if (!body || !documentElement) { - throw new Error("Unable to get document size"); - } - var width = Math.max(Math.max(body.scrollWidth, documentElement.scrollWidth), Math.max(body.offsetWidth, documentElement.offsetWidth), Math.max(body.clientWidth, documentElement.clientWidth)); - var height = Math.max(Math.max(body.scrollHeight, documentElement.scrollHeight), Math.max(body.offsetHeight, documentElement.offsetHeight), Math.max(body.clientHeight, documentElement.clientHeight)); - return new Bounds(0, 0, width, height); - }; - - /* - * css-line-break 2.1.0 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var toCodePoints$1 = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint$1 = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var chars$2 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$2 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$2 = 0; i$2 < chars$2.length; i$2++) { - lookup$2[chars$2.charCodeAt(i$2)] = i$2; - } - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1$1 = 0; i$1$1 < chars$1$1.length; i$1$1++) { - lookup$1$1[chars$1$1.charCodeAt(i$1$1)] = i$1$1; - } - var decode$1 = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? 
new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1$1[base64.charCodeAt(i)]; - encoded2 = lookup$1$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array$1 = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2$1 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1$1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. - */ - var UTRIE2_INDEX_SHIFT$1 = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2$1 = UTRIE2_SHIFT_1$1 - UTRIE2_SHIFT_2$1; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET$1 = 0x10000 >> UTRIE2_SHIFT_2$1; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_2$1; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK$1 = UTRIE2_DATA_BLOCK_LENGTH$1 - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH$1 = 0x400 >> UTRIE2_SHIFT_2$1; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH$1 = UTRIE2_LSCP_INDEX_2_OFFSET$1 + UTRIE2_LSCP_INDEX_2_LENGTH$1; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 = UTRIE2_INDEX_2_BMP_LENGTH$1; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH$1 = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET$1 = UTRIE2_UTF8_2B_INDEX_2_OFFSET$1 + UTRIE2_UTF8_2B_INDEX_2_LENGTH$1; - /** - * Number of index-1 entries for the BMP. 
32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 = 0x10000 >> UTRIE2_SHIFT_1$1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH$1 = 1 << UTRIE2_SHIFT_1_2$1; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK$1 = UTRIE2_INDEX_2_BLOCK_LENGTH$1 - 1; - var slice16$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32$1 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64$1 = function (base64, _byteLength) { - var buffer = decode$1(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array$1(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array$1(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16$1(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? slice16$1(view16, (headerLength + view32[4]) / 2) - : slice32$1(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie$1(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie$1 = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2$1]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET$1 + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2$1)]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET$1 - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH$1 + (codePoint >> UTRIE2_SHIFT_1$1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2$1) & UTRIE2_INDEX_2_MASK$1; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT$1) + (codePoint & UTRIE2_DATA_MASK$1); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. 
- return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$3 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$3 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$3 = 0; i$3 < chars$3.length; i$3++) { - lookup$3[chars$3.charCodeAt(i$3)] = i$3; - } - - var base64$1 = 'KwAAAAAAAAAACA4AUD0AADAgAAACAAAAAAAIABAAGABAAEgAUABYAGAAaABgAGgAYgBqAF8AZwBgAGgAcQB5AHUAfQCFAI0AlQCdAKIAqgCyALoAYABoAGAAaABgAGgAwgDKAGAAaADGAM4A0wDbAOEA6QDxAPkAAQEJAQ8BFwF1AH0AHAEkASwBNAE6AUIBQQFJAVEBWQFhAWgBcAF4ATAAgAGGAY4BlQGXAZ8BpwGvAbUBvQHFAc0B0wHbAeMB6wHxAfkBAQIJAvEBEQIZAiECKQIxAjgCQAJGAk4CVgJeAmQCbAJ0AnwCgQKJApECmQKgAqgCsAK4ArwCxAIwAMwC0wLbAjAA4wLrAvMC+AIAAwcDDwMwABcDHQMlAy0DNQN1AD0DQQNJA0kDSQNRA1EDVwNZA1kDdQB1AGEDdQBpA20DdQN1AHsDdQCBA4kDkQN1AHUAmQOhA3UAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AKYDrgN1AHUAtgO+A8YDzgPWAxcD3gPjA+sD8wN1AHUA+wMDBAkEdQANBBUEHQQlBCoEFwMyBDgEYABABBcDSARQBFgEYARoBDAAcAQzAXgEgASIBJAEdQCXBHUAnwSnBK4EtgS6BMIEyAR1AHUAdQB1AHUAdQCVANAEYABgAGAAYABgAGAAYABgANgEYADcBOQEYADsBPQE/AQEBQwFFAUcBSQFLAU0BWQEPAVEBUsFUwVbBWAAYgVgAGoFcgV6BYIFigWRBWAAmQWfBaYFYABgAGAAYABgAKoFYACxBbAFuQW6BcEFwQXHBcEFwQXPBdMF2wXjBeoF8gX6BQIGCgYSBhoGIgYqBjIGOgZgAD4GRgZMBmAAUwZaBmAAYABgAGAAYABgAGAAYABgAGAAYABgAGIGYABpBnAGYABgAGAAYABgAGAAYABgAGAAYAB4Bn8GhQZgAGAAYAB1AHcDFQSLBmAAYABgAJMGdQA9A3UAmwajBqsGqwaVALMGuwbDBjAAywbSBtIG1QbSBtIG0gbSBtIG0gbdBuMG6wbzBvsGAwcLBxMHAwcbByMHJwcsBywHMQcsB9IGOAdAB0gHTgfSBkgHVgfSBtIG0gbSBtIG0gbSBtIG0gbSBiwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdgAGAALAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsByw
HLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAdbB2MHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB2kH0gZwB64EdQB1AHUAdQB1AHUAdQB1AHUHfQdgAIUHjQd1AHUAlQedB2AAYAClB6sHYACzB7YHvgfGB3UAzgfWBzMB3gfmB1EB7gf1B/0HlQENAQUIDQh1ABUIHQglCBcDLQg1CD0IRQhNCEEDUwh1AHUAdQBbCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIaQhjCGQIZQhmCGcIaAhpCGMIZAhlCGYIZwhoCGkIYwhkCGUIZghnCGgIcAh3CHoIMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIgggwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAALAcsBywHLAcsBywHLAcsBywHLAcsB4oILAcsB44I0gaWCJ4Ipgh1AHUAqgiyCHUAdQB1AHUAdQB1AHUAdQB1AHUAtwh8AXUAvwh1AMUIyQjRCNkI4AjoCHUAdQB1AO4I9gj+CAYJDgkTCS0HGwkjCYIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiCCIIIggiAAIAAAAFAAYABgAGIAXwBgAHEAdQBFAJUAogCyAKAAYABgAEIA4ABGANMA4QDxAMEBDwE1AFwBLAE6AQEBUQF4QkhCmEKoQrhCgAHIQsAB0MLAAcABwAHAAeDC6ABoAHDCwMMAAcABwAHAAdDDGMMAAcAB6MM4wwjDWMNow3jDaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAGgAaABoAEjDqABWw6bDqABpg6gAaABoAHcDvwOPA+gAaABfA/8DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DvwO/A78DpcPAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAA
cABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcAB9cPKwkyCToJMAB1AHUAdQBCCUoJTQl1AFUJXAljCWcJawkwADAAMAAwAHMJdQB2CX4JdQCECYoJjgmWCXUAngkwAGAAYABxAHUApgn3A64JtAl1ALkJdQDACTAAMAAwADAAdQB1AHUAdQB1AHUAdQB1AHUAowYNBMUIMAAwADAAMADICcsJ0wnZCRUE4QkwAOkJ8An4CTAAMAB1AAAKvwh1AAgKDwoXCh8KdQAwACcKLgp1ADYKqAmICT4KRgowADAAdQB1AE4KMAB1AFYKdQBeCnUAZQowADAAMAAwADAAMAAwADAAMAAVBHUAbQowADAAdQC5CXUKMAAwAHwBxAijBogEMgF9CoQKiASMCpQKmgqIBKIKqgquCogEDQG2Cr4KxgrLCjAAMADTCtsKCgHjCusK8Qr5CgELMAAwADAAMAB1AIsECQsRC3UANAEZCzAAMAAwADAAMAB1ACELKQswAHUANAExCzkLdQBBC0kLMABRC1kLMAAwADAAMAAwADAAdQBhCzAAMAAwAGAAYABpC3ELdwt/CzAAMACHC4sLkwubC58Lpwt1AK4Ltgt1APsDMAAwADAAMAAwADAAMAAwAL4LwwvLC9IL1wvdCzAAMADlC+kL8Qv5C/8LSQswADAAMAAwADAAMAAwADAAMAAHDDAAMAAwADAAMAAODBYMHgx1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1ACYMMAAwADAAdQB1AHUALgx1AHUAdQB1AHUAdQA2DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AD4MdQBGDHUAdQB1AHUAdQB1AEkMdQB1AHUAdQB1AFAMMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQBYDHUAdQB1AF8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUA+wMVBGcMMAAwAHwBbwx1AHcMfwyHDI8MMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAYABgAJcMMAAwADAAdQB1AJ8MlQClDDAAMACtDCwHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB7UMLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHdQB1AHUAdQB1AHUAdQB1AHUAdQB1AHUAdQB1AA0EMAC9DDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAsBywHLAcsBywHLAcsBywHLQcwAMEMyAwsBywHLAcsBywHLAcsBywHLAcsBywHzAwwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwAHUAdQB1ANQM2QzhDDAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMABgAGAAYABgAGAAYABgAOkMYADxDGAA+AwADQYNYABhCWAAYAAODTAAMAAwADAAFg1gAGAAHg37AzAAMAAwADAAYABgACYNYAAsDTQNPA1gAEMNPg1LDWAAYABgAGAAYABgAGAAYABgAGAAUg1aDYsGVglhDV0NcQBnDW0NdQ15DWAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAlQCBDZUAiA2PDZcNMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAnw2nDTAAMAAwADAAMAAwAHUArw23DTAAMAAwADAAMAAwADAAMAAwADAAMAB1AL8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAB1AHUAdQB1AHUAdQDHDTAAYABgAM8NMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA1w11ANwNMAAwAD0B5A0wADAAMAAwADAAMADsDfQN/A0EDgwOFA4wABsOMAAwADAAMAAwADAAMAAwANIG0gbSBtIG0gbSBtIG0gYjDigOwQUuDsEFMw7SBjoO0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGQg5KDlIOVg7SBtIGXg5lDm0OdQ7SBtIGfQ6EDooOjQ6UDtIGmg6hDtIG0gaoDqwO0ga0DrwO0gZgAGAAYADEDmAAYAAkBtIGzA5gANIOYADaDokO0gbSBt8O5w7SBu8O0gb1DvwO0gZgAGAAxA7SBtIG0gbSBtIGYABgAGAAYAAED2AAsAUMD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHJA8sBywHLAcsBywHLAccDywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsB
ywPLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAc0D9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAccD9IG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIGFA8sBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHLAcsBywHPA/SBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gbSBtIG0gYUD0QPlQCVAJUAMAAwADAAMACVAJUAlQCVAJUAlQCVAEwPMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAA//8EAAQABAAEAAQABAAEAAQABAANAAMAAQABAAIABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQACgATABcAHgAbABoAHgAXABYAEgAeABsAGAAPABgAHABLAEsASwBLAEsASwBLAEsASwBLABgAGAAeAB4AHgATAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABYAGwASAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWAA0AEQAeAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAFAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJABYAGgAbABsAGwAeAB0AHQAeAE8AFwAeAA0AHgAeABoAGwBPAE8ADgBQAB0AHQAdAE8ATwAXAE8ATwBPABYAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AHgAeAFAATwBAAE8ATwBPAEAATwBQAFAATwBQAB4AHgAeAB4AHgAeAB0AHQAdAB0AHgAdAB4ADgBQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgBQAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAJAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAkACQAJAAkACQAJAAkABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAFAAHgAeAB4AKwArAFAAUABQAFAAGABQACsAKwArACsAHgAeAFAAHgBQAFAAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUAAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAYAA0AKwArAB4AHgAbACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAB4ABAAEAB4ABAAEABMABAArACsAKwArACsAKwArACsAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAKwArACsAKwBWAFYAVgBWAB4A
HgArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AGgAaABoAGAAYAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQAEwAEACsAEwATAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABLAEsASwBLAEsASwBLAEsASwBLABoAGQAZAB4AUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQABMAUAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABABQAFAABAAEAB4ABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUAAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAFAABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQAUABQAB4AHgAYABMAUAArACsABAAbABsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAFAABAAEAAQABAAEAFAABAAEAAQAUAAEAAQABAAEAAQAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArACsAHgArAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAUAAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEAA0ADQBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUAArACsAKwBQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABABQACsAKwArACsAKwArACsAKwAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUAAaABoAUABQAFAAUABQAEwAHgAbAFAAHgAEACsAKwAEAAQABAArAFAAUABQAFAAUABQACsAKwArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQACsAUABQACsAKwAEACsABAAEAAQABAAEACsAKwArACsABAAEACsAKwAEAAQABAArACsAKwAEACsAKwArACsAKwArACsAUABQAFAAUAArAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLAAQABABQAFAAUAAEAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsAKwAEAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAArACsAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AGwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAKwArACsAKwArAAQABAAEACsAKwArACsAUABQACsAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAAQAUAArAFAAUABQAFAAUABQACsAKwArAFAAUABQACsAUABQAFAAUAArACsAKwBQAFAAKwBQACsAUABQACsAKwArAFAAUAArACsAKwBQAFAAUAArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArAAQABAAEAAQABAArACsAKwAEAAQABAArAAQABAAEAAQAKwArAFAAKwArACsAKwArACsABAArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAHgAeAB4AHgAeAB4AGwAeACsAKwArACsAKwAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUAB
QAFAAUABQAFAAKwArACsAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAUABQAFAAKwArACsAKwArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwAOAFAAUABQAFAAUABQAFAAHgBQAAQABAAEAA4AUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAKwArAAQAUAAEAAQABAAEAAQABAAEACsABAAEAAQAKwAEAAQABAAEACsAKwArACsAKwArACsABAAEACsAKwArACsAKwArACsAUAArAFAAUAAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAFAABAAEAAQABAAEAAQABAArAAQABAAEACsABAAEAAQABABQAB4AKwArACsAKwBQAFAAUAAEAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQABoAUABQAFAAUABQAFAAKwAEAAQABAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQACsAUAArACsAUABQAFAAUABQAFAAUAArACsAKwAEACsAKwArACsABAAEAAQABAAEAAQAKwAEACsABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArAAQABAAeACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAXAAqACoAKgAqACoAKgAqACsAKwArACsAGwBcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAeAEsASwBLAEsASwBLAEsASwBLAEsADQANACsAKwArACsAKwBcAFwAKwBcACsAXABcAFwAXABcACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAXAArAFwAXABcAFwAXABcAFwAXABcAFwAKgBcAFwAKgAqACoAKgAqACoAKgAqACoAXAArACsAXABcAFwAXABcACsAXAArACoAKgAqACoAKgAqACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwBcAFwAXABcAFAADgAOAA4ADgAeAA4ADgAJAA4ADgANAAkAEwATABMAEwATAAkAHgATAB4AHgAeAAQABAAeAB4AHgAeAB4AHgBLAEsASwBLAEsASwBLAEsASwBLAFAAUABQAFAAUABQAFAAUABQAFAADQAEAB4ABAAeAAQAFgARABYAEQAEAAQAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQADQAEAAQABAAEAAQADQAEAAQAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAA0ADQAeAB4AHgAeAB4AHgAEAB4AHgAeAB4AHgAeACsAHgAeAA4ADgANAA4AHgAeAB4AHgAeAAkACQArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgBcAEsASwBLAEsASwBLAEsASwBLAEsADQANAB4AHgAeAB4AXABcAFwAXABcAFwAKgAqACoAKgBcAFwAXABcACoAKgAqAFwAKgAqACoAXABcACoAKgAqACoAKgAqACoAXABcAFwAKgAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKgAqAFwAKgBLAEsASwBLAEsASwBLAEsASwBLACoAKgAqACoAKgAqAFAAUABQAFAAUABQACsAUAArACsAKwArACsAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAKwBQACsAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsABAAEAAQAHgANAB4AHgAeAB4AHgAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArAC
sAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUAArACsADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAWABEAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQANAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAANAA0AKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUAArAAQABAArACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqAA0ADQAVAFwADQAeAA0AGwBcACoAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwAeAB4AEwATAA0ADQAOAB4AEwATAB4ABAAEAAQACQArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAHgArACsAKwATABMASwBLAEsASwBLAEsASwBLAEsASwBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAXABcAFwAXABcACsAKwArACsAKwArACsAKwArACsAKwBcAFwAXABcAFwAXABcAFwAXABcAFwAXAArACsAKwArAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAXAArACsAKwAqACoAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAArACsAHgAeAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcACoAKgAqACoAKgAqACoAKgAqACoAKwAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKwArAAQASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACoAKgAqACoAKgAqACoAXAAqACoAKgAqACoAKgArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABABQAFAAUABQAFAAUABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwANAA0AHgANAA0ADQANAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAEAAQABAAEAAQAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwAeAB4AHgAeAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArAA0ADQANAA0ADQBLAEsASwBLAEsASwBLAEsASwBLACsAKwArAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAA0ADQBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUAAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArAAQABAAEAB4ABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAAQAUABQAFAAUABQAFAABABQAFAABAAEAAQAUAArACsAKwArACsABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQACsAUAArAFAAKwAeAB4AHgAeAB4AHgAeAB4AH
gAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAFAAUABQACsAHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQACsAKwAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQACsAHgAeAB4AHgAeAB4AHgAOAB4AKwANAA0ADQANAA0ADQANAAkADQANAA0ACAAEAAsABAAEAA0ACQANAA0ADAAdAB0AHgAXABcAFgAXABcAFwAWABcAHQAdAB4AHgAUABQAFAANAAEAAQAEAAQABAAEAAQACQAaABoAGgAaABoAGgAaABoAHgAXABcAHQAVABUAHgAeAB4AHgAeAB4AGAAWABEAFQAVABUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ADQAeAA0ADQANAA0AHgANAA0ADQAHAB4AHgAeAB4AKwAEAAQABAAEAAQABAAEAAQABAAEAFAAUAArACsATwBQAFAAUABQAFAAHgAeAB4AFgARAE8AUABPAE8ATwBPAFAAUABQAFAAUAAeAB4AHgAWABEAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArABsAGwAbABsAGwAbABsAGgAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGgAbABsAGwAbABoAGwAbABoAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbABsAGwAbAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAHgAeAFAAGgAeAB0AHgBQAB4AGgAeAB4AHgAeAB4AHgAeAB4AHgBPAB4AUAAbAB4AHgBQAFAAUABQAFAAHgAeAB4AHQAdAB4AUAAeAFAAHgBQAB4AUABPAFAAUAAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAHgBQAFAAUABQAE8ATwBQAFAAUABQAFAATwBQAFAATwBQAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAUABQAFAATwBPAE8ATwBPAE8ATwBPAE8ATwBQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABPAB4AHgArACsAKwArAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHQAdAB4AHgAeAB0AHQAeAB4AHQAeAB4AHgAdAB4AHQAbABsAHgAdAB4AHgAeAB4AHQAeAB4AHQAdAB0AHQAeAB4AHQAeAB0AHgAdAB0AHQAdAB0AHQAeAB0AHgAeAB4AHgAeAB0AHQAdAB0AHgAeAB4AHgAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB4AHgAeAB0AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAeAB0AHQAdAB0AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAdAB4AHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAWABEAHgAeAB4AHgAeAB4AHQAeAB4AHgAeAB4AHgAeACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAWABEAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAFAAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAeAB4AHQAdAB0AHQAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB0AHQAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB0AHQAeAB4AHQAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AHQAdAB0AHgAeAB0AHgAeAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlAB4AHQAdAB4AHgAdAB4AHgAeAB4AHQAdAB4AHgAeAB4AJQAlAB0AHQAlAB4AJQAlACUAIAAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAeAB4AHgAeAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHgAdAB0AHQAeAB0AJQAdAB0AHgAdAB0AHgAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHQAdAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAdAB0AHQAdACUAHgAlACUAJQAdACUAJQAdAB0AHQAlACUAHQAdACUAHQAdACUAJQAlAB4AHQAe
AB4AHgAeAB0AHQAlAB0AHQAdAB0AHQAdACUAJQAlACUAJQAdACUAJQAgACUAHQAdACUAJQAlACUAJQAlACUAJQAeAB4AHgAlACUAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB0AHgAeAB4AFwAXABcAFwAXABcAHgATABMAJQAeAB4AHgAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARABYAEQAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAWABEAFgARABYAEQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAWABEAFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AFgARAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAdAB0AHQAdAB0AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAFAAUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAEAAQABAAeAB4AKwArACsAKwArABMADQANAA0AUAATAA0AUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUAANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAA0ADQANAA0ADQANAA0ADQAeAA0AFgANAB4AHgAXABcAHgAeABcAFwAWABEAFgARABYAEQAWABEADQANAA0ADQATAFAADQANAB4ADQANAB4AHgAeAB4AHgAMAAwADQANAA0AHgANAA0AFgANAA0ADQANAA0ADQANAA0AHgANAB4ADQANAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArACsAKwArACsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArAA0AEQARACUAJQBHAFcAVwAWABEAFgARABYAEQAWABEAFgARACUAJQAWABEAFgARABYAEQAWABEAFQAWABEAEQAlAFcAVwBXAFcAVwBXAFcAVwBXAAQABAAEAAQABAAEACUAVwBXAFcAVwA2ACUAJQBXAFcAVwBHAEcAJQAlACUAKwBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBRAFcAUQBXAFEAVwBXAFcAVwBXAFcAUQBXAFcAVwBXAFcAVwBRAFEAKwArAAQABAAVABUARwBHAFcAFQBRAFcAUQBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFEAVwBRAFcAUQBXAFcAVwBXAFcAVwBRAFcAVwBXAFcAVwBXAFEAUQBXAFcAVwBXABUAUQBHAEcAVwArACsAKwArACsAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwAlACUAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACsAKwArACsAKwArACsAKwArACsAKwArAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAUQBRAFEAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBPAE8ATwBPAE8ATwBPAE8AJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADQATAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABLAEsASwBLAEsASwBLAEsASwBLAFAAUAArACs
AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAABAAEAAQABAAeAAQABAAEAAQABAAEAAQABAAEAAQAHgBQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUABQAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAeAA0ADQANAA0ADQArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AUAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAB4AHgAeAB4AHgAeAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AUABQAFAAUABQAFAAUABQAFAAUABQAAQAUABQAFAABABQAFAAUABQAAQAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAeAB4AHgAeAAQAKwArACsAUABQAFAAUABQAFAAHgAeABoAHgArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAADgAOABMAEwArACsAKwArACsAKwArACsABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwANAA0ASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUAAeAB4AHgBQAA4AUABQAAQAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArAB4AWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYAFgAWABYACsAKwArAAQAHgAeAB4AHgAeAB4ADQANAA0AHgAeAB4AHgArAFAASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArAB4AHgBcAFwAXABcAFwAKgBcAFwAXABcAFwAXABcAFwAXABcAEsASwBLAEsASwBLAEsASwBLAEsAXABcAFwAXABcACsAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAFAAUABQAAQAUABQAFAAUABQAFAAUABQAAQABAArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAHgANAA0ADQBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKgAqACoAXAAqACoAKgBcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXAAqAFwAKgAqACoAXABcACoAKgBcAFwAXABcAFwAKgAqAFwAKgBcACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFwAXABcACoAKgBQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAA0ADQBQAFAAUAAEAAQAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQADQAEAAQAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAVABVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBUAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVAFUAVQBVACsAKwArACsAKwArACsAKwArACsAKwArAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAWQBZAFkAKwArACsAKwBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAWgBaAFoAKwArACsAKwAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYABgAGAAYAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAKwArACsAKwArAFYABABWAFYAVgBWAFYAVgBWAFYAVgBWAB4AVgBWAFYAVgBWAFYAVgBWAFYAVgBWAFYAVgArAFYAVgBWAFYAVgArAFYAKwBWAFYAKwBWAFYAKwBWAFYAVgBWAFYAVgBWAFYAVgBWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAEQAWAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUA
BQAFAAUABQAFAAUABQAFAAUAAaAB4AKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAGAARABEAGAAYABMAEwAWABEAFAArACsAKwArACsAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACUAJQAlACUAJQAWABEAFgARABYAEQAWABEAFgARABYAEQAlACUAFgARACUAJQAlACUAJQAlACUAEQAlABEAKwAVABUAEwATACUAFgARABYAEQAWABEAJQAlACUAJQAlACUAJQAlACsAJQAbABoAJQArACsAKwArAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAcAKwATACUAJQAbABoAJQAlABYAEQAlACUAEQAlABEAJQBXAFcAVwBXAFcAVwBXAFcAVwBXABUAFQAlACUAJQATACUAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXABYAJQARACUAJQAlAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAWACUAEQAlABYAEQARABYAEQARABUAVwBRAFEAUQBRAFEAUQBRAFEAUQBRAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAEcARwArACsAVwBXAFcAVwBXAFcAKwArAFcAVwBXAFcAVwBXACsAKwBXAFcAVwBXAFcAVwArACsAVwBXAFcAKwArACsAGgAbACUAJQAlABsAGwArAB4AHgAeAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwAEAAQABAAQAB0AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsADQANAA0AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAA0AUABQAFAAUAArACsAKwArAFAAUABQAFAAUABQAFAAUAANAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwArAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwBQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwANAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAB4AUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAUABQAFAAUABQAAQABAAEACsABAAEACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAKwBQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArA
A0ADQANAA0ADQANAA0ADQAeACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAArACsAKwArAFAAUABQAFAAUAANAA0ADQANAA0ADQAUACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsADQANAA0ADQANAA0ADQBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAB4AHgAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArAAQABAANACsAKwBQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAB4AHgAeAB4AHgArACsAKwArACsAKwAEAAQABAAEAAQABAAEAA0ADQAeAB4AHgAeAB4AKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwAeACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEACsASwBLAEsASwBLAEsASwBLAEsASwANAA0ADQANAFAABAAEAFAAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAeAA4AUAArACsAKwArACsAKwArACsAKwAEAFAAUABQAFAADQANAB4ADQAEAAQABAAEAB4ABAAEAEsASwBLAEsASwBLAEsASwBLAEsAUAAOAFAADQANAA0AKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAANAA0AHgANAA0AHgAEACsAUABQAFAAUABQAFAAUAArAFAAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAA0AKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsABAAEAAQABAArAFAAUABQAFAAUABQAFAAUAArACsAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQACsABAAEAFAABAAEAAQABAAEAAQABAArACsABAAEACsAKwAEAAQABAArACsAUAArACsAKwArACsAKwAEACsAKwArACsAKwBQAFAAUABQAFAABAAEACsAKwAEAAQABAAEAAQABAAEACsAKwArAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsABAAEAAQABAAEAAQABABQAFAAUABQAA0ADQANAA0AHgBLAEsASwBLAEsASwBLAEsASwBLAA0ADQArAB4ABABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAFAAUAAeAFAAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABAAEAAQADgANAA0AEwATAB4AHgAeAA0ADQANAA0ADQANAA0ADQANAA0ADQANAA0ADQANAFAAUABQAFAABAAEACsAKwAEAA0ADQAeAFAAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAFAAKwArACsAKwArACsAKwBLAEsASwBLAEsASwBLAEsASwBLACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAXABcAFwAKwArACoAKgAqACoAKgAqACoAKgAqACoAKgAqACoAKgAqACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBcAFwADQANAA0AKgBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeACsA
KwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAKwArAFAAKwArAFAAUABQAFAAUABQAFAAUAArAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQAKwAEAAQAKwArAAQABAAEAAQAUAAEAFAABAAEAA0ADQANACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAArACsABAAEAAQABAAEAAQABABQAA4AUAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAABAAEAAQABAAEAAQABAAEAAQABABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAFAABAAEAAQABAAOAB4ADQANAA0ADQAOAB4ABAArACsAKwArACsAKwArACsAUAAEAAQABAAEAAQABAAEAAQABAAEAAQAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAA0ADQANAFAADgAOAA4ADQANACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAEAAQABAAEACsABAAEAAQABAAEAAQABAAEAFAADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAOABMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQACsAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAArACsAKwAEACsABAAEACsABAAEAAQABAAEAAQABABQAAQAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAUABQAFAAUABQAFAAKwBQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAUAArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAABAAEAAQABAAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAaABoAGgAaAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArAA0AUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsADQANAA0ADQANACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABIAEgAQwBDAEMAUABQAFAAUABDAFAAUABQAEgAQwBIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAASABDAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwAJAAkACQAJAAkACQAJABYAEQArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABIAEMAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwANAA0AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArAAQABAAEAAQABAANACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEAA0ADQANAB4AHgAeAB4AHgAeAFAAUABQAFAADQAeACsAKwArACsAKwArACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAANAA0AHgAeACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwAEAFAABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArACsAKwArACsAKwAEAAQABAAEAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAARwBHABUARwAJACsAKwArACsAKwArACsAKwArACsAKwAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArAFcAVwB
XAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUQBRAFEAKwArACsAKwArACsAKwArACsAKwArACsAKwBRAFEAUQBRACsAKwArACsAKwArACsAKwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUAArACsAHgAEAAQADQAEAAQABAAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAAQABAAEAAQABAAeAB4AHgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAB4AHgAEAAQABAAEAAQABAAEAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4ABAAEAAQAHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwArACsAKwArACsAKwArACsAKwArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAKwArAFAAKwArAFAAUAArACsAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACsAUAArAFAAUABQAFAAUABQAFAAKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwBQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAHgAeAFAAUABQAFAAUAArAFAAKwArACsAUABQAFAAUABQAFAAUAArAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAB4AHgAeAB4AHgAeAB4AHgAeACsAKwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAEsASwBLAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAeAB4AHgAeAB4AHgAeAB4ABAAeAB4AHgAeAB4AHgAeAB4AHgAeAAQAHgAeAA0ADQANAA0AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQAKwAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAKwArAAQABAAEAAQABAAEAAQAKwAEAAQAKwAEAAQABAAEAAQAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwAEAAQABAAEAAQABAAEAFAAUABQAFAAUABQAFAAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwBQAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArABsAUABQAFAAUABQACsAKwBQAFAAUABQAFAAUABQAFAAUAAEAAQABAAEAAQABAAEACsAKwArACsAKwArACsAKwArAB4AHgAeAB4ABAAEAAQABAAEAAQABABQACsAKwArACsASwBLAEsASwBLAEsASwBLAEsASwArACsAKwArABYAFgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAGgBQAFAAUAAaAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAeAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQACsAKwBQAFAAUABQACsAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUAArACsAKwArACsAKwBQACsAKwArACsAUAArAFAAKwBQACsAUABQAFAAKwBQAFAAKwBQACsAKwBQACsAUAArAF
AAKwBQACsAUAArAFAAUAArAFAAKwArAFAAUABQAFAAKwBQAFAAUABQAFAAUABQACsAUABQAFAAUAArAFAAUABQAFAAKwBQACsAUABQAFAAUABQAFAAUABQAFAAUAArAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAArACsAKwArACsAUABQAFAAKwBQAFAAUABQAFAAKwBQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwAeAB4AKwArACsAKwArACsAKwArACsAKwArACsAKwArAE8ATwBPAE8ATwBPAE8ATwBPAE8ATwBPAE8AJQAlACUAHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHgAeAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB4AHgAeACUAJQAlAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAdAB0AHQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAKQApACkAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAlACUAJQAlACUAHgAlACUAJQAlACUAIAAgACAAJQAlACAAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACEAIQAhACEAIQAlACUAIAAgACUAJQAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlACUAIAAlACUAJQAlACAAIAAgACUAIAAgACAAJQAlACUAJQAlACUAJQAgACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAlAB4AJQAeACUAJQAlACUAJQAgACUAJQAlACUAHgAlAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAgACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACAAIAAgACAAIAAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeABcAFwAXABUAFQAVAB4AHgAeAB4AJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAgACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlACUAJQAeAB4AHgAeAB4AHgAeAB4AHgAeACUAJQAlACUAJQAlAB4AHgAeAB4AHgAeAB4AHgAlACUAJQAlACUAJQAlACUAHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAgACUAJQAgACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAJQAlACUAJQAlACUAIAAlACUAJQAlACUAJQAlACUAJQAgACAAIAAgACAAIAAgACAAIAAgACUAJQAgACAAIAAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACAAIAAlACAAIAAlACAAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAgACAAIAAlACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAJQAlAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AKwAeAB4AHgAeAB4AHgAeAB4AHgAeAB4AHgArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAEsASwBLAEsASwBLAEsASwBLAEsAKwArACsAKwArACsAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwArAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwAlACUAJQAlACUAJQAlACUAJQAlACUAVwBXACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQBXAFcAVwBXAFcAVwBXAFcAVwBXAFcAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAJQAlACUAKwAEACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQAK
wArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArACsAKwArAA=='; - - var LETTER_NUMBER_MODIFIER = 50; - // Non-tailorable Line Breaking Classes - var BK = 1; // Cause a line break (after) - var CR$1 = 2; // Cause a line break (after), except between CR and LF - var LF$1 = 3; // Cause a line break (after) - var CM = 4; // Prohibit a line break between the character and the preceding character - var NL = 5; // Cause a line break (after) - var WJ = 7; // Prohibit line breaks before and after - var ZW = 8; // Provide a break opportunity - var GL = 9; // Prohibit line breaks before and after - var SP = 10; // Enable indirect line breaks - var ZWJ$1 = 11; // Prohibit line breaks within joiner sequences - // Break Opportunities - var B2 = 12; // Provide a line break opportunity before and after the character - var BA = 13; // Generally provide a line break opportunity after the character - var BB = 14; // Generally provide a line break opportunity before the character - var HY = 15; // Provide a line break opportunity after the character, except in numeric context - var CB = 16; // Provide a line break opportunity contingent on additional information - // Characters Prohibiting Certain Breaks - var CL = 17; // Prohibit line breaks before - var CP = 18; // Prohibit line breaks before - var EX = 19; // Prohibit line breaks before - var IN = 20; // Allow only indirect line breaks between pairs - var NS = 21; // Allow only indirect line breaks before - var OP = 22; // Prohibit line breaks after - var QU = 23; // Act like they are both opening and closing - // Numeric Context - var IS = 24; // Prevent breaks after any and before numeric - var NU = 25; // Form numeric expressions for line breaking purposes - var PO = 26; // Do not break following a numeric expression - var PR = 27; // Do not break in front of a numeric expression - var SY = 28; // Prevent a break before; and allow a break after - // Other Characters - var AI = 29; // Act like AL when the resolvedEAW is N; otherwise; act as ID - var AL = 30; // Are alphabetic characters or symbols that are used with alphabetic characters - var CJ = 31; // Treat as NS or ID for strict or normal breaking. - var EB = 32; // Do not break from following Emoji Modifier - var EM = 33; // Do not break from preceding Emoji Base - var H2 = 34; // Form Korean syllable blocks - var H3 = 35; // Form Korean syllable blocks - var HL = 36; // Do not break around a following hyphen; otherwise act as Alphabetic - var ID = 37; // Break before or after; except in some numeric context - var JL = 38; // Form Korean syllable blocks - var JV = 39; // Form Korean syllable blocks - var JT = 40; // Form Korean syllable blocks - var RI$1 = 41; // Keep pairs together. 
For pairs; break before and after other classes - var SA = 42; // Provide a line break opportunity contingent on additional, language-specific context analysis - var XX = 43; // Have as yet unknown line breaking behavior or unassigned code positions - var ea_OP = [0x2329, 0xff08]; - var BREAK_MANDATORY = '!'; - var BREAK_NOT_ALLOWED$1 = '×'; - var BREAK_ALLOWED$1 = '÷'; - var UnicodeTrie$1 = createTrieFromBase64$1(base64$1); - var ALPHABETICS = [AL, HL]; - var HARD_LINE_BREAKS = [BK, CR$1, LF$1, NL]; - var SPACE$1 = [SP, ZW]; - var PREFIX_POSTFIX = [PR, PO]; - var LINE_BREAKS = HARD_LINE_BREAKS.concat(SPACE$1); - var KOREAN_SYLLABLE_BLOCK = [JL, JV, JT, H2, H3]; - var HYPHEN = [HY, BA]; - var codePointsToCharacterClasses = function (codePoints, lineBreak) { - if (lineBreak === void 0) { lineBreak = 'strict'; } - var types = []; - var indices = []; - var categories = []; - codePoints.forEach(function (codePoint, index) { - var classType = UnicodeTrie$1.get(codePoint); - if (classType > LETTER_NUMBER_MODIFIER) { - categories.push(true); - classType -= LETTER_NUMBER_MODIFIER; - } - else { - categories.push(false); - } - if (['normal', 'auto', 'loose'].indexOf(lineBreak) !== -1) { - // U+2010, – U+2013, 〜 U+301C, ゠ U+30A0 - if ([0x2010, 0x2013, 0x301c, 0x30a0].indexOf(codePoint) !== -1) { - indices.push(index); - return types.push(CB); - } - } - if (classType === CM || classType === ZWJ$1) { - // LB10 Treat any remaining combining mark or ZWJ as AL. - if (index === 0) { - indices.push(index); - return types.push(AL); - } - // LB9 Do not break a combining character sequence; treat it as if it has the line breaking class of - // the base character in all of the following rules. Treat ZWJ as if it were CM. - var prev = types[index - 1]; - if (LINE_BREAKS.indexOf(prev) === -1) { - indices.push(indices[index - 1]); - return types.push(prev); - } - indices.push(index); - return types.push(AL); - } - indices.push(index); - if (classType === CJ) { - return types.push(lineBreak === 'strict' ? NS : ID); - } - if (classType === SA) { - return types.push(AL); - } - if (classType === AI) { - return types.push(AL); - } - // For supplementary characters, a useful default is to treat characters in the range 10000..1FFFD as AL - // and characters in the ranges 20000..2FFFD and 30000..3FFFD as ID, until the implementation can be revised - // to take into account the actual line breaking properties for these characters. - if (classType === XX) { - if ((codePoint >= 0x20000 && codePoint <= 0x2fffd) || (codePoint >= 0x30000 && codePoint <= 0x3fffd)) { - return types.push(ID); - } - else { - return types.push(AL); - } - } - types.push(classType); - }); - return [indices, types, categories]; - }; - var isAdjacentWithSpaceIgnored = function (a, b, currentIndex, classTypes) { - var current = classTypes[currentIndex]; - if (Array.isArray(a) ? a.indexOf(current) !== -1 : a === current) { - var i = currentIndex; - while (i <= classTypes.length) { - i++; - var next = classTypes[i]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (current === SP) { - var i = currentIndex; - while (i > 0) { - i--; - var prev = classTypes[i]; - if (Array.isArray(a) ? 
a.indexOf(prev) !== -1 : a === prev) { - var n = currentIndex; - while (n <= classTypes.length) { - n++; - var next = classTypes[n]; - if (next === b) { - return true; - } - if (next !== SP) { - break; - } - } - } - if (prev !== SP) { - break; - } - } - } - return false; - }; - var previousNonSpaceClassType = function (currentIndex, classTypes) { - var i = currentIndex; - while (i >= 0) { - var type = classTypes[i]; - if (type === SP) { - i--; - } - else { - return type; - } - } - return 0; - }; - var _lineBreakAtIndex = function (codePoints, classTypes, indicies, index, forbiddenBreaks) { - if (indicies[index] === 0) { - return BREAK_NOT_ALLOWED$1; - } - var currentIndex = index - 1; - if (Array.isArray(forbiddenBreaks) && forbiddenBreaks[currentIndex] === true) { - return BREAK_NOT_ALLOWED$1; - } - var beforeIndex = currentIndex - 1; - var afterIndex = currentIndex + 1; - var current = classTypes[currentIndex]; - // LB4 Always break after hard line breaks. - // LB5 Treat CR followed by LF, as well as CR, LF, and NL as hard line breaks. - var before = beforeIndex >= 0 ? classTypes[beforeIndex] : 0; - var next = classTypes[afterIndex]; - if (current === CR$1 && next === LF$1) { - return BREAK_NOT_ALLOWED$1; - } - if (HARD_LINE_BREAKS.indexOf(current) !== -1) { - return BREAK_MANDATORY; - } - // LB6 Do not break before hard line breaks. - if (HARD_LINE_BREAKS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB7 Do not break before spaces or zero width space. - if (SPACE$1.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB8 Break before any character following a zero-width space, even if one or more spaces intervene. - if (previousNonSpaceClassType(currentIndex, classTypes) === ZW) { - return BREAK_ALLOWED$1; - } - // LB8a Do not break after a zero width joiner. - if (UnicodeTrie$1.get(codePoints[currentIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // zwj emojis - if ((current === EB || current === EM) && UnicodeTrie$1.get(codePoints[afterIndex]) === ZWJ$1) { - return BREAK_NOT_ALLOWED$1; - } - // LB11 Do not break before or after Word joiner and related characters. - if (current === WJ || next === WJ) { - return BREAK_NOT_ALLOWED$1; - } - // LB12 Do not break after NBSP and related characters. - if (current === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB12a Do not break before NBSP and related characters, except after spaces and hyphens. - if ([SP, BA, HY].indexOf(current) === -1 && next === GL) { - return BREAK_NOT_ALLOWED$1; - } - // LB13 Do not break before ‘]’ or ‘!’ or ‘;’ or ‘/’, even after spaces. - if ([CL, CP, EX, IS, SY].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB14 Do not break after ‘[’, even after spaces. - if (previousNonSpaceClassType(currentIndex, classTypes) === OP) { - return BREAK_NOT_ALLOWED$1; - } - // LB15 Do not break within ‘”[’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(QU, OP, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB16 Do not break between closing punctuation and a nonstarter (lb=NS), even with intervening spaces. - if (isAdjacentWithSpaceIgnored([CL, CP], NS, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB17 Do not break within ‘——’, even with intervening spaces. - if (isAdjacentWithSpaceIgnored(B2, B2, currentIndex, classTypes)) { - return BREAK_NOT_ALLOWED$1; - } - // LB18 Break after spaces. - if (current === SP) { - return BREAK_ALLOWED$1; - } - // LB19 Do not break before or after quotation marks, such as ‘ ” ’. 
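- // (QU is classed above as acting like both an opening and a closing quote, so LB19 suppresses the break on both sides rather than only before or only after.)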
- if (current === QU || next === QU) { - return BREAK_NOT_ALLOWED$1; - } - // LB20 Break before and after unresolved CB. - if (next === CB || current === CB) { - return BREAK_ALLOWED$1; - } - // LB21 Do not break before hyphen-minus, other hyphens, fixed-width spaces, small kana, and other non-starters, or after acute accents. - if ([BA, HY, NS].indexOf(next) !== -1 || current === BB) { - return BREAK_NOT_ALLOWED$1; - } - // LB21a Don't break after Hebrew + Hyphen. - if (before === HL && HYPHEN.indexOf(current) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB21b Don’t break between Solidus and Hebrew letters. - if (current === SY && next === HL) { - return BREAK_NOT_ALLOWED$1; - } - // LB22 Do not break before ellipsis. - if (next === IN) { - return BREAK_NOT_ALLOWED$1; - } - // LB23 Do not break between digits and letters. - if ((ALPHABETICS.indexOf(next) !== -1 && current === NU) || (ALPHABETICS.indexOf(current) !== -1 && next === NU)) { - return BREAK_NOT_ALLOWED$1; - } - // LB23a Do not break between numeric prefixes and ideographs, or between ideographs and numeric postfixes. - if ((current === PR && [ID, EB, EM].indexOf(next) !== -1) || - ([ID, EB, EM].indexOf(current) !== -1 && next === PO)) { - return BREAK_NOT_ALLOWED$1; - } - // LB24 Do not break between numeric prefix/postfix and letters, or between letters and prefix/postfix. - if ((ALPHABETICS.indexOf(current) !== -1 && PREFIX_POSTFIX.indexOf(next) !== -1) || - (PREFIX_POSTFIX.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // LB25 Do not break between the following pairs of classes relevant to numbers: - if ( - // (PR | PO) × ( OP | HY )? NU - ([PR, PO].indexOf(current) !== -1 && - (next === NU || ([OP, HY].indexOf(next) !== -1 && classTypes[afterIndex + 1] === NU))) || - // ( OP | HY ) × NU - ([OP, HY].indexOf(current) !== -1 && next === NU) || - // NU × (NU | SY | IS) - (current === NU && [NU, SY, IS].indexOf(next) !== -1)) { - return BREAK_NOT_ALLOWED$1; - } - // NU (NU | SY | IS)* × (NU | SY | IS | CL | CP) - if ([NU, SY, IS, CL, CP].indexOf(next) !== -1) { - var prevIndex = currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // NU (NU | SY | IS)* (CL | CP)? × (PO | PR)) - if ([PR, PO].indexOf(next) !== -1) { - var prevIndex = [CL, CP].indexOf(current) !== -1 ? beforeIndex : currentIndex; - while (prevIndex >= 0) { - var type = classTypes[prevIndex]; - if (type === NU) { - return BREAK_NOT_ALLOWED$1; - } - else if ([SY, IS].indexOf(type) !== -1) { - prevIndex--; - } - else { - break; - } - } - } - // LB26 Do not break a Korean syllable. - if ((JL === current && [JL, JV, H2, H3].indexOf(next) !== -1) || - ([JV, H2].indexOf(current) !== -1 && [JV, JT].indexOf(next) !== -1) || - ([JT, H3].indexOf(current) !== -1 && next === JT)) { - return BREAK_NOT_ALLOWED$1; - } - // LB27 Treat a Korean Syllable Block the same as ID. - if ((KOREAN_SYLLABLE_BLOCK.indexOf(current) !== -1 && [IN, PO].indexOf(next) !== -1) || - (KOREAN_SYLLABLE_BLOCK.indexOf(next) !== -1 && current === PR)) { - return BREAK_NOT_ALLOWED$1; - } - // LB28 Do not break between alphabetics (“at”). - if (ALPHABETICS.indexOf(current) !== -1 && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB29 Do not break between numeric punctuation and alphabetics (“e.g.”). 
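- // (IS is the infix numeric separator class, covering characters such as '.' and ',', which keeps abbreviations like "e.g." unbroken.)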
- if (current === IS && ALPHABETICS.indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED$1; - } - // LB30 Do not break between letters, numbers, or ordinary symbols and opening or closing parentheses. - if ((ALPHABETICS.concat(NU).indexOf(current) !== -1 && - next === OP && - ea_OP.indexOf(codePoints[afterIndex]) === -1) || - (ALPHABETICS.concat(NU).indexOf(next) !== -1 && current === CP)) { - return BREAK_NOT_ALLOWED$1; - } - // LB30a Break between two regional indicator symbols if and only if there are an even number of regional - // indicators preceding the position of the break. - if (current === RI$1 && next === RI$1) { - var i = indicies[currentIndex]; - var count = 1; - while (i > 0) { - i--; - if (classTypes[i] === RI$1) { - count++; - } - else { - break; - } - } - if (count % 2 !== 0) { - return BREAK_NOT_ALLOWED$1; - } - } - // LB30b Do not break between an emoji base and an emoji modifier. - if (current === EB && next === EM) { - return BREAK_NOT_ALLOWED$1; - } - return BREAK_ALLOWED$1; - }; - var cssFormattedClasses = function (codePoints, options) { - if (!options) { - options = { lineBreak: 'normal', wordBreak: 'normal' }; - } - var _a = codePointsToCharacterClasses(codePoints, options.lineBreak), indicies = _a[0], classTypes = _a[1], isLetterNumber = _a[2]; - if (options.wordBreak === 'break-all' || options.wordBreak === 'break-word') { - classTypes = classTypes.map(function (type) { return ([NU, AL, SA].indexOf(type) !== -1 ? ID : type); }); - } - var forbiddenBreakpoints = options.wordBreak === 'keep-all' - ? isLetterNumber.map(function (letterNumber, i) { - return letterNumber && codePoints[i] >= 0x4e00 && codePoints[i] <= 0x9fff; - }) - : undefined; - return [indicies, classTypes, forbiddenBreakpoints]; - }; - var Break = /** @class */ (function () { - function Break(codePoints, lineBreak, start, end) { - this.codePoints = codePoints; - this.required = lineBreak === BREAK_MANDATORY; - this.start = start; - this.end = end; - } - Break.prototype.slice = function () { - return fromCodePoint$1.apply(void 0, this.codePoints.slice(this.start, this.end)); - }; - return Break; - }()); - var LineBreaker = function (str, options) { - var codePoints = toCodePoints$1(str); - var _a = cssFormattedClasses(codePoints, options), indicies = _a[0], classTypes = _a[1], forbiddenBreakpoints = _a[2]; - var length = codePoints.length; - var lastEnd = 0; - var nextIndex = 0; - return { - next: function () { - if (nextIndex >= length) { - return { done: true, value: null }; - } - var lineBreak = BREAK_NOT_ALLOWED$1; - while (nextIndex < length && - (lineBreak = _lineBreakAtIndex(codePoints, classTypes, indicies, ++nextIndex, forbiddenBreakpoints)) === - BREAK_NOT_ALLOWED$1) { } - if (lineBreak !== BREAK_NOT_ALLOWED$1 || nextIndex === length) { - var value = new Break(codePoints, lineBreak, lastEnd, nextIndex); - lastEnd = nextIndex; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - - // https://www.w3.org/TR/css-syntax-3 - var FLAG_UNRESTRICTED = 1 << 0; - var FLAG_ID = 1 << 1; - var FLAG_INTEGER = 1 << 2; - var FLAG_NUMBER = 1 << 3; - var LINE_FEED = 0x000a; - var SOLIDUS = 0x002f; - var REVERSE_SOLIDUS = 0x005c; - var CHARACTER_TABULATION = 0x0009; - var SPACE = 0x0020; - var QUOTATION_MARK = 0x0022; - var EQUALS_SIGN = 0x003d; - var NUMBER_SIGN = 0x0023; - var DOLLAR_SIGN = 0x0024; - var PERCENTAGE_SIGN = 0x0025; - var APOSTROPHE = 0x0027; - var LEFT_PARENTHESIS = 0x0028; - var RIGHT_PARENTHESIS = 0x0029; - var LOW_LINE = 0x005f; - var 
HYPHEN_MINUS = 0x002d; - var EXCLAMATION_MARK = 0x0021; - var LESS_THAN_SIGN = 0x003c; - var GREATER_THAN_SIGN = 0x003e; - var COMMERCIAL_AT = 0x0040; - var LEFT_SQUARE_BRACKET = 0x005b; - var RIGHT_SQUARE_BRACKET = 0x005d; - var CIRCUMFLEX_ACCENT = 0x005e; - var LEFT_CURLY_BRACKET = 0x007b; - var QUESTION_MARK = 0x003f; - var RIGHT_CURLY_BRACKET = 0x007d; - var VERTICAL_LINE = 0x007c; - var TILDE = 0x007e; - var CONTROL = 0x0080; - var REPLACEMENT_CHARACTER = 0xfffd; - var ASTERISK = 0x002a; - var PLUS_SIGN = 0x002b; - var COMMA = 0x002c; - var COLON = 0x003a; - var SEMICOLON = 0x003b; - var FULL_STOP = 0x002e; - var NULL = 0x0000; - var BACKSPACE = 0x0008; - var LINE_TABULATION = 0x000b; - var SHIFT_OUT = 0x000e; - var INFORMATION_SEPARATOR_ONE = 0x001f; - var DELETE = 0x007f; - var EOF = -1; - var ZERO = 0x0030; - var a = 0x0061; - var e = 0x0065; - var f = 0x0066; - var u = 0x0075; - var z = 0x007a; - var A = 0x0041; - var E = 0x0045; - var F = 0x0046; - var U = 0x0055; - var Z = 0x005a; - var isDigit = function (codePoint) { return codePoint >= ZERO && codePoint <= 0x0039; }; - var isSurrogateCodePoint = function (codePoint) { return codePoint >= 0xd800 && codePoint <= 0xdfff; }; - var isHex = function (codePoint) { - return isDigit(codePoint) || (codePoint >= A && codePoint <= F) || (codePoint >= a && codePoint <= f); - }; - var isLowerCaseLetter = function (codePoint) { return codePoint >= a && codePoint <= z; }; - var isUpperCaseLetter = function (codePoint) { return codePoint >= A && codePoint <= Z; }; - var isLetter = function (codePoint) { return isLowerCaseLetter(codePoint) || isUpperCaseLetter(codePoint); }; - var isNonASCIICodePoint = function (codePoint) { return codePoint >= CONTROL; }; - var isWhiteSpace = function (codePoint) { - return codePoint === LINE_FEED || codePoint === CHARACTER_TABULATION || codePoint === SPACE; - }; - var isNameStartCodePoint = function (codePoint) { - return isLetter(codePoint) || isNonASCIICodePoint(codePoint) || codePoint === LOW_LINE; - }; - var isNameCodePoint = function (codePoint) { - return isNameStartCodePoint(codePoint) || isDigit(codePoint) || codePoint === HYPHEN_MINUS; - }; - var isNonPrintableCodePoint = function (codePoint) { - return ((codePoint >= NULL && codePoint <= BACKSPACE) || - codePoint === LINE_TABULATION || - (codePoint >= SHIFT_OUT && codePoint <= INFORMATION_SEPARATOR_ONE) || - codePoint === DELETE); - }; - var isValidEscape = function (c1, c2) { - if (c1 !== REVERSE_SOLIDUS) { - return false; - } - return c2 !== LINE_FEED; - }; - var isIdentifierStart = function (c1, c2, c3) { - if (c1 === HYPHEN_MINUS) { - return isNameStartCodePoint(c2) || isValidEscape(c2, c3); - } - else if (isNameStartCodePoint(c1)) { - return true; - } - else if (c1 === REVERSE_SOLIDUS && isValidEscape(c1, c2)) { - return true; - } - return false; - }; - var isNumberStart = function (c1, c2, c3) { - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - if (isDigit(c2)) { - return true; - } - return c2 === FULL_STOP && isDigit(c3); - } - if (c1 === FULL_STOP) { - return isDigit(c2); - } - return isDigit(c1); - }; - var stringToNumber = function (codePoints) { - var c = 0; - var sign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - sign = -1; - } - c++; - } - var integers = []; - while (isDigit(codePoints[c])) { - integers.push(codePoints[c++]); - } - var int = integers.length ? 
parseInt(fromCodePoint$1.apply(void 0, integers), 10) : 0; - if (codePoints[c] === FULL_STOP) { - c++; - } - var fraction = []; - while (isDigit(codePoints[c])) { - fraction.push(codePoints[c++]); - } - var fracd = fraction.length; - var frac = fracd ? parseInt(fromCodePoint$1.apply(void 0, fraction), 10) : 0; - if (codePoints[c] === E || codePoints[c] === e) { - c++; - } - var expsign = 1; - if (codePoints[c] === PLUS_SIGN || codePoints[c] === HYPHEN_MINUS) { - if (codePoints[c] === HYPHEN_MINUS) { - expsign = -1; - } - c++; - } - var exponent = []; - while (isDigit(codePoints[c])) { - exponent.push(codePoints[c++]); - } - var exp = exponent.length ? parseInt(fromCodePoint$1.apply(void 0, exponent), 10) : 0; - return sign * (int + frac * Math.pow(10, -fracd)) * Math.pow(10, expsign * exp); - }; - var LEFT_PARENTHESIS_TOKEN = { - type: 2 /* LEFT_PARENTHESIS_TOKEN */ - }; - var RIGHT_PARENTHESIS_TOKEN = { - type: 3 /* RIGHT_PARENTHESIS_TOKEN */ - }; - var COMMA_TOKEN = { type: 4 /* COMMA_TOKEN */ }; - var SUFFIX_MATCH_TOKEN = { type: 13 /* SUFFIX_MATCH_TOKEN */ }; - var PREFIX_MATCH_TOKEN = { type: 8 /* PREFIX_MATCH_TOKEN */ }; - var COLUMN_TOKEN = { type: 21 /* COLUMN_TOKEN */ }; - var DASH_MATCH_TOKEN = { type: 9 /* DASH_MATCH_TOKEN */ }; - var INCLUDE_MATCH_TOKEN = { type: 10 /* INCLUDE_MATCH_TOKEN */ }; - var LEFT_CURLY_BRACKET_TOKEN = { - type: 11 /* LEFT_CURLY_BRACKET_TOKEN */ - }; - var RIGHT_CURLY_BRACKET_TOKEN = { - type: 12 /* RIGHT_CURLY_BRACKET_TOKEN */ - }; - var SUBSTRING_MATCH_TOKEN = { type: 14 /* SUBSTRING_MATCH_TOKEN */ }; - var BAD_URL_TOKEN = { type: 23 /* BAD_URL_TOKEN */ }; - var BAD_STRING_TOKEN = { type: 1 /* BAD_STRING_TOKEN */ }; - var CDO_TOKEN = { type: 25 /* CDO_TOKEN */ }; - var CDC_TOKEN = { type: 24 /* CDC_TOKEN */ }; - var COLON_TOKEN = { type: 26 /* COLON_TOKEN */ }; - var SEMICOLON_TOKEN = { type: 27 /* SEMICOLON_TOKEN */ }; - var LEFT_SQUARE_BRACKET_TOKEN = { - type: 28 /* LEFT_SQUARE_BRACKET_TOKEN */ - }; - var RIGHT_SQUARE_BRACKET_TOKEN = { - type: 29 /* RIGHT_SQUARE_BRACKET_TOKEN */ - }; - var WHITESPACE_TOKEN = { type: 31 /* WHITESPACE_TOKEN */ }; - var EOF_TOKEN = { type: 32 /* EOF_TOKEN */ }; - var Tokenizer = /** @class */ (function () { - function Tokenizer() { - this._value = []; - } - Tokenizer.prototype.write = function (chunk) { - this._value = this._value.concat(toCodePoints$1(chunk)); - }; - Tokenizer.prototype.read = function () { - var tokens = []; - var token = this.consumeToken(); - while (token !== EOF_TOKEN) { - tokens.push(token); - token = this.consumeToken(); - } - return tokens; - }; - Tokenizer.prototype.consumeToken = function () { - var codePoint = this.consumeCodePoint(); - switch (codePoint) { - case QUOTATION_MARK: - return this.consumeStringToken(QUOTATION_MARK); - case NUMBER_SIGN: - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isNameCodePoint(c1) || isValidEscape(c2, c3)) { - var flags = isIdentifierStart(c1, c2, c3) ? 
FLAG_ID : FLAG_UNRESTRICTED; - var value = this.consumeName(); - return { type: 5 /* HASH_TOKEN */, value: value, flags: flags }; - } - break; - case DOLLAR_SIGN: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUFFIX_MATCH_TOKEN; - } - break; - case APOSTROPHE: - return this.consumeStringToken(APOSTROPHE); - case LEFT_PARENTHESIS: - return LEFT_PARENTHESIS_TOKEN; - case RIGHT_PARENTHESIS: - return RIGHT_PARENTHESIS_TOKEN; - case ASTERISK: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return SUBSTRING_MATCH_TOKEN; - } - break; - case PLUS_SIGN: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case COMMA: - return COMMA_TOKEN; - case HYPHEN_MINUS: - var e1 = codePoint; - var e2 = this.peekCodePoint(0); - var e3 = this.peekCodePoint(1); - if (isNumberStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isIdentifierStart(e1, e2, e3)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - if (e2 === HYPHEN_MINUS && e3 === GREATER_THAN_SIGN) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDC_TOKEN; - } - break; - case FULL_STOP: - if (isNumberStart(codePoint, this.peekCodePoint(0), this.peekCodePoint(1))) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - break; - case SOLIDUS: - if (this.peekCodePoint(0) === ASTERISK) { - this.consumeCodePoint(); - while (true) { - var c = this.consumeCodePoint(); - if (c === ASTERISK) { - c = this.consumeCodePoint(); - if (c === SOLIDUS) { - return this.consumeToken(); - } - } - if (c === EOF) { - return this.consumeToken(); - } - } - } - break; - case COLON: - return COLON_TOKEN; - case SEMICOLON: - return SEMICOLON_TOKEN; - case LESS_THAN_SIGN: - if (this.peekCodePoint(0) === EXCLAMATION_MARK && - this.peekCodePoint(1) === HYPHEN_MINUS && - this.peekCodePoint(2) === HYPHEN_MINUS) { - this.consumeCodePoint(); - this.consumeCodePoint(); - return CDO_TOKEN; - } - break; - case COMMERCIAL_AT: - var a1 = this.peekCodePoint(0); - var a2 = this.peekCodePoint(1); - var a3 = this.peekCodePoint(2); - if (isIdentifierStart(a1, a2, a3)) { - var value = this.consumeName(); - return { type: 7 /* AT_KEYWORD_TOKEN */, value: value }; - } - break; - case LEFT_SQUARE_BRACKET: - return LEFT_SQUARE_BRACKET_TOKEN; - case REVERSE_SOLIDUS: - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - break; - case RIGHT_SQUARE_BRACKET: - return RIGHT_SQUARE_BRACKET_TOKEN; - case CIRCUMFLEX_ACCENT: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return PREFIX_MATCH_TOKEN; - } - break; - case LEFT_CURLY_BRACKET: - return LEFT_CURLY_BRACKET_TOKEN; - case RIGHT_CURLY_BRACKET: - return RIGHT_CURLY_BRACKET_TOKEN; - case u: - case U: - var u1 = this.peekCodePoint(0); - var u2 = this.peekCodePoint(1); - if (u1 === PLUS_SIGN && (isHex(u2) || u2 === QUESTION_MARK)) { - this.consumeCodePoint(); - this.consumeUnicodeRangeToken(); - } - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - case VERTICAL_LINE: - if (this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return DASH_MATCH_TOKEN; - } - if (this.peekCodePoint(0) === VERTICAL_LINE) { - this.consumeCodePoint(); - return COLUMN_TOKEN; - } - break; - case TILDE: - if 
(this.peekCodePoint(0) === EQUALS_SIGN) { - this.consumeCodePoint(); - return INCLUDE_MATCH_TOKEN; - } - break; - case EOF: - return EOF_TOKEN; - } - if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - return WHITESPACE_TOKEN; - } - if (isDigit(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeNumericToken(); - } - if (isNameStartCodePoint(codePoint)) { - this.reconsumeCodePoint(codePoint); - return this.consumeIdentLikeToken(); - } - return { type: 6 /* DELIM_TOKEN */, value: fromCodePoint$1(codePoint) }; - }; - Tokenizer.prototype.consumeCodePoint = function () { - var value = this._value.shift(); - return typeof value === 'undefined' ? -1 : value; - }; - Tokenizer.prototype.reconsumeCodePoint = function (codePoint) { - this._value.unshift(codePoint); - }; - Tokenizer.prototype.peekCodePoint = function (delta) { - if (delta >= this._value.length) { - return -1; - } - return this._value[delta]; - }; - Tokenizer.prototype.consumeUnicodeRangeToken = function () { - var digits = []; - var codePoint = this.consumeCodePoint(); - while (isHex(codePoint) && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var questionMarks = false; - while (codePoint === QUESTION_MARK && digits.length < 6) { - digits.push(codePoint); - codePoint = this.consumeCodePoint(); - questionMarks = true; - } - if (questionMarks) { - var start_1 = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? ZERO : digit); })), 16); - var end = parseInt(fromCodePoint$1.apply(void 0, digits.map(function (digit) { return (digit === QUESTION_MARK ? F : digit); })), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start_1, end: end }; - } - var start = parseInt(fromCodePoint$1.apply(void 0, digits), 16); - if (this.peekCodePoint(0) === HYPHEN_MINUS && isHex(this.peekCodePoint(1))) { - this.consumeCodePoint(); - codePoint = this.consumeCodePoint(); - var endDigits = []; - while (isHex(codePoint) && endDigits.length < 6) { - endDigits.push(codePoint); - codePoint = this.consumeCodePoint(); - } - var end = parseInt(fromCodePoint$1.apply(void 0, endDigits), 16); - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: end }; - } - else { - return { type: 30 /* UNICODE_RANGE_TOKEN */, start: start, end: start }; - } - }; - Tokenizer.prototype.consumeIdentLikeToken = function () { - var value = this.consumeName(); - if (value.toLowerCase() === 'url' && this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return this.consumeUrlToken(); - } - else if (this.peekCodePoint(0) === LEFT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 19 /* FUNCTION_TOKEN */, value: value }; - } - return { type: 20 /* IDENT_TOKEN */, value: value }; - }; - Tokenizer.prototype.consumeUrlToken = function () { - var value = []; - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF) { - return { type: 22 /* URL_TOKEN */, value: '' }; - } - var next = this.peekCodePoint(0); - if (next === APOSTROPHE || next === QUOTATION_MARK) { - var stringToken = this.consumeStringToken(this.consumeCodePoint()); - if (stringToken.type === 0 /* STRING_TOKEN */) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: stringToken.value }; - } - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - while (true) { - var codePoint = this.consumeCodePoint(); - if 
(codePoint === EOF || codePoint === RIGHT_PARENTHESIS) { - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - else if (isWhiteSpace(codePoint)) { - this.consumeWhiteSpace(); - if (this.peekCodePoint(0) === EOF || this.peekCodePoint(0) === RIGHT_PARENTHESIS) { - this.consumeCodePoint(); - return { type: 22 /* URL_TOKEN */, value: fromCodePoint$1.apply(void 0, value) }; - } - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === QUOTATION_MARK || - codePoint === APOSTROPHE || - codePoint === LEFT_PARENTHESIS || - isNonPrintableCodePoint(codePoint)) { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - else if (codePoint === REVERSE_SOLIDUS) { - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - value.push(this.consumeEscapedCodePoint()); - } - else { - this.consumeBadUrlRemnants(); - return BAD_URL_TOKEN; - } - } - else { - value.push(codePoint); - } - } - }; - Tokenizer.prototype.consumeWhiteSpace = function () { - while (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - }; - Tokenizer.prototype.consumeBadUrlRemnants = function () { - while (true) { - var codePoint = this.consumeCodePoint(); - if (codePoint === RIGHT_PARENTHESIS || codePoint === EOF) { - return; - } - if (isValidEscape(codePoint, this.peekCodePoint(0))) { - this.consumeEscapedCodePoint(); - } - } - }; - Tokenizer.prototype.consumeStringSlice = function (count) { - var SLICE_STACK_SIZE = 50000; - var value = ''; - while (count > 0) { - var amount = Math.min(SLICE_STACK_SIZE, count); - value += fromCodePoint$1.apply(void 0, this._value.splice(0, amount)); - count -= amount; - } - this._value.shift(); - return value; - }; - Tokenizer.prototype.consumeStringToken = function (endingCodePoint) { - var value = ''; - var i = 0; - do { - var codePoint = this._value[i]; - if (codePoint === EOF || codePoint === undefined || codePoint === endingCodePoint) { - value += this.consumeStringSlice(i); - return { type: 0 /* STRING_TOKEN */, value: value }; - } - if (codePoint === LINE_FEED) { - this._value.splice(0, i); - return BAD_STRING_TOKEN; - } - if (codePoint === REVERSE_SOLIDUS) { - var next = this._value[i + 1]; - if (next !== EOF && next !== undefined) { - if (next === LINE_FEED) { - value += this.consumeStringSlice(i); - i = -1; - this._value.shift(); - } - else if (isValidEscape(codePoint, next)) { - value += this.consumeStringSlice(i); - value += fromCodePoint$1(this.consumeEscapedCodePoint()); - i = -1; - } - } - } - i++; - } while (true); - }; - Tokenizer.prototype.consumeNumber = function () { - var repr = []; - var type = FLAG_INTEGER; - var c1 = this.peekCodePoint(0); - if (c1 === PLUS_SIGN || c1 === HYPHEN_MINUS) { - repr.push(this.consumeCodePoint()); - } - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - if (c1 === FULL_STOP && isDigit(c2)) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - c1 = this.peekCodePoint(0); - c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if ((c1 === E || c1 === e) && (((c2 === PLUS_SIGN || c2 === HYPHEN_MINUS) && isDigit(c3)) || isDigit(c2))) { - repr.push(this.consumeCodePoint(), this.consumeCodePoint()); - type = FLAG_NUMBER; - while (isDigit(this.peekCodePoint(0))) { - repr.push(this.consumeCodePoint()); - } - } - return [stringToNumber(repr), 
type]; - }; - Tokenizer.prototype.consumeNumericToken = function () { - var _a = this.consumeNumber(), number = _a[0], flags = _a[1]; - var c1 = this.peekCodePoint(0); - var c2 = this.peekCodePoint(1); - var c3 = this.peekCodePoint(2); - if (isIdentifierStart(c1, c2, c3)) { - var unit = this.consumeName(); - return { type: 15 /* DIMENSION_TOKEN */, number: number, flags: flags, unit: unit }; - } - if (c1 === PERCENTAGE_SIGN) { - this.consumeCodePoint(); - return { type: 16 /* PERCENTAGE_TOKEN */, number: number, flags: flags }; - } - return { type: 17 /* NUMBER_TOKEN */, number: number, flags: flags }; - }; - Tokenizer.prototype.consumeEscapedCodePoint = function () { - var codePoint = this.consumeCodePoint(); - if (isHex(codePoint)) { - var hex = fromCodePoint$1(codePoint); - while (isHex(this.peekCodePoint(0)) && hex.length < 6) { - hex += fromCodePoint$1(this.consumeCodePoint()); - } - if (isWhiteSpace(this.peekCodePoint(0))) { - this.consumeCodePoint(); - } - var hexCodePoint = parseInt(hex, 16); - if (hexCodePoint === 0 || isSurrogateCodePoint(hexCodePoint) || hexCodePoint > 0x10ffff) { - return REPLACEMENT_CHARACTER; - } - return hexCodePoint; - } - if (codePoint === EOF) { - return REPLACEMENT_CHARACTER; - } - return codePoint; - }; - Tokenizer.prototype.consumeName = function () { - var result = ''; - while (true) { - var codePoint = this.consumeCodePoint(); - if (isNameCodePoint(codePoint)) { - result += fromCodePoint$1(codePoint); - } - else if (isValidEscape(codePoint, this.peekCodePoint(0))) { - result += fromCodePoint$1(this.consumeEscapedCodePoint()); - } - else { - this.reconsumeCodePoint(codePoint); - return result; - } - } - }; - return Tokenizer; - }()); - - var Parser = /** @class */ (function () { - function Parser(tokens) { - this._tokens = tokens; - } - Parser.create = function (value) { - var tokenizer = new Tokenizer(); - tokenizer.write(value); - return new Parser(tokenizer.read()); - }; - Parser.parseValue = function (value) { - return Parser.create(value).parseComponentValue(); - }; - Parser.parseValues = function (value) { - return Parser.create(value).parseComponentValues(); - }; - Parser.prototype.parseComponentValue = function () { - var token = this.consumeToken(); - while (token.type === 31 /* WHITESPACE_TOKEN */) { - token = this.consumeToken(); - } - if (token.type === 32 /* EOF_TOKEN */) { - throw new SyntaxError("Error parsing CSS component value, unexpected EOF"); - } - this.reconsumeToken(token); - var value = this.consumeComponentValue(); - do { - token = this.consumeToken(); - } while (token.type === 31 /* WHITESPACE_TOKEN */); - if (token.type === 32 /* EOF_TOKEN */) { - return value; - } - throw new SyntaxError("Error parsing CSS component value, multiple values found when expecting only one"); - }; - Parser.prototype.parseComponentValues = function () { - var values = []; - while (true) { - var value = this.consumeComponentValue(); - if (value.type === 32 /* EOF_TOKEN */) { - return values; - } - values.push(value); - } - }; - Parser.prototype.consumeComponentValue = function () { - var token = this.consumeToken(); - switch (token.type) { - case 11 /* LEFT_CURLY_BRACKET_TOKEN */: - case 28 /* LEFT_SQUARE_BRACKET_TOKEN */: - case 2 /* LEFT_PARENTHESIS_TOKEN */: - return this.consumeSimpleBlock(token.type); - case 19 /* FUNCTION_TOKEN */: - return this.consumeFunction(token); - } - return token; - }; - Parser.prototype.consumeSimpleBlock = function (type) { - var block = { type: type, values: [] }; - var token = 
this.consumeToken(); - while (true) { - if (token.type === 32 /* EOF_TOKEN */ || isEndingTokenFor(token, type)) { - return block; - } - this.reconsumeToken(token); - block.values.push(this.consumeComponentValue()); - token = this.consumeToken(); - } - }; - Parser.prototype.consumeFunction = function (functionToken) { - var cssFunction = { - name: functionToken.value, - values: [], - type: 18 /* FUNCTION */ - }; - while (true) { - var token = this.consumeToken(); - if (token.type === 32 /* EOF_TOKEN */ || token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */) { - return cssFunction; - } - this.reconsumeToken(token); - cssFunction.values.push(this.consumeComponentValue()); - } - }; - Parser.prototype.consumeToken = function () { - var token = this._tokens.shift(); - return typeof token === 'undefined' ? EOF_TOKEN : token; - }; - Parser.prototype.reconsumeToken = function (token) { - this._tokens.unshift(token); - }; - return Parser; - }()); - var isDimensionToken = function (token) { return token.type === 15 /* DIMENSION_TOKEN */; }; - var isNumberToken = function (token) { return token.type === 17 /* NUMBER_TOKEN */; }; - var isIdentToken = function (token) { return token.type === 20 /* IDENT_TOKEN */; }; - var isStringToken = function (token) { return token.type === 0 /* STRING_TOKEN */; }; - var isIdentWithValue = function (token, value) { - return isIdentToken(token) && token.value === value; - }; - var nonWhiteSpace = function (token) { return token.type !== 31 /* WHITESPACE_TOKEN */; }; - var nonFunctionArgSeparator = function (token) { - return token.type !== 31 /* WHITESPACE_TOKEN */ && token.type !== 4 /* COMMA_TOKEN */; - }; - var parseFunctionArgs = function (tokens) { - var args = []; - var arg = []; - tokens.forEach(function (token) { - if (token.type === 4 /* COMMA_TOKEN */) { - if (arg.length === 0) { - throw new Error("Error parsing function args, zero tokens for arg"); - } - args.push(arg); - arg = []; - return; - } - if (token.type !== 31 /* WHITESPACE_TOKEN */) { - arg.push(token); - } - }); - if (arg.length) { - args.push(arg); - } - return args; - }; - var isEndingTokenFor = function (token, type) { - if (type === 11 /* LEFT_CURLY_BRACKET_TOKEN */ && token.type === 12 /* RIGHT_CURLY_BRACKET_TOKEN */) { - return true; - } - if (type === 28 /* LEFT_SQUARE_BRACKET_TOKEN */ && token.type === 29 /* RIGHT_SQUARE_BRACKET_TOKEN */) { - return true; - } - return type === 2 /* LEFT_PARENTHESIS_TOKEN */ && token.type === 3 /* RIGHT_PARENTHESIS_TOKEN */; - }; - - var isLength = function (token) { - return token.type === 17 /* NUMBER_TOKEN */ || token.type === 15 /* DIMENSION_TOKEN */; - }; - - var isLengthPercentage = function (token) { - return token.type === 16 /* PERCENTAGE_TOKEN */ || isLength(token); - }; - var parseLengthPercentageTuple = function (tokens) { - return tokens.length > 1 ? [tokens[0], tokens[1]] : [tokens[0]]; - }; - var ZERO_LENGTH = { - type: 17 /* NUMBER_TOKEN */, - number: 0, - flags: FLAG_INTEGER - }; - var FIFTY_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var HUNDRED_PERCENT = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 100, - flags: FLAG_INTEGER - }; - var getAbsoluteValueForTuple = function (tuple, width, height) { - var x = tuple[0], y = tuple[1]; - return [getAbsoluteValue(x, width), getAbsoluteValue(typeof y !== 'undefined' ? 
y : x, height)]; - }; - var getAbsoluteValue = function (token, parent) { - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - return (token.number / 100) * parent; - } - if (isDimensionToken(token)) { - switch (token.unit) { - case 'rem': - case 'em': - return 16 * token.number; // TODO use correct font-size - case 'px': - default: - return token.number; - } - } - return token.number; - }; - - var DEG = 'deg'; - var GRAD = 'grad'; - var RAD = 'rad'; - var TURN = 'turn'; - var angle = { - name: 'angle', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit) { - case DEG: - return (Math.PI * value.number) / 180; - case GRAD: - return (Math.PI / 200) * value.number; - case RAD: - return value.number; - case TURN: - return Math.PI * 2 * value.number; - } - } - throw new Error("Unsupported angle type"); - } - }; - var isAngle = function (value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - if (value.unit === DEG || value.unit === GRAD || value.unit === RAD || value.unit === TURN) { - return true; - } - } - return false; - }; - var parseNamedSide = function (tokens) { - var sideOrCorner = tokens - .filter(isIdentToken) - .map(function (ident) { return ident.value; }) - .join(' '); - switch (sideOrCorner) { - case 'to bottom right': - case 'to right bottom': - case 'left top': - case 'top left': - return [ZERO_LENGTH, ZERO_LENGTH]; - case 'to top': - case 'bottom': - return deg(0); - case 'to bottom left': - case 'to left bottom': - case 'right top': - case 'top right': - return [ZERO_LENGTH, HUNDRED_PERCENT]; - case 'to right': - case 'left': - return deg(90); - case 'to top left': - case 'to left top': - case 'right bottom': - case 'bottom right': - return [HUNDRED_PERCENT, HUNDRED_PERCENT]; - case 'to bottom': - case 'top': - return deg(180); - case 'to top right': - case 'to right top': - case 'left bottom': - case 'bottom left': - return [HUNDRED_PERCENT, ZERO_LENGTH]; - case 'to left': - case 'right': - return deg(270); - } - return 0; - }; - var deg = function (deg) { return (Math.PI * deg) / 180; }; - - var color$1 = { - name: 'color', - parse: function (context, value) { - if (value.type === 18 /* FUNCTION */) { - var colorFunction = SUPPORTED_COLOR_FUNCTIONS[value.name]; - if (typeof colorFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported color function \"" + value.name + "\""); - } - return colorFunction(context, value.values); - } - if (value.type === 5 /* HASH_TOKEN */) { - if (value.value.length === 3) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), 1); - } - if (value.value.length === 4) { - var r = value.value.substring(0, 1); - var g = value.value.substring(1, 2); - var b = value.value.substring(2, 3); - var a = value.value.substring(3, 4); - return pack(parseInt(r + r, 16), parseInt(g + g, 16), parseInt(b + b, 16), parseInt(a + a, 16) / 255); - } - if (value.value.length === 6) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), 1); - } - if (value.value.length === 8) { - var r = value.value.substring(0, 2); - var g = value.value.substring(2, 4); - var b = value.value.substring(4, 6); - var a = value.value.substring(6, 8); - return pack(parseInt(r, 16), parseInt(g, 16), parseInt(b, 16), parseInt(a, 16) / 255); - } 
- } - if (value.type === 20 /* IDENT_TOKEN */) { - var namedColor = COLORS[value.value.toUpperCase()]; - if (typeof namedColor !== 'undefined') { - return namedColor; - } - } - return COLORS.TRANSPARENT; - } - }; - var isTransparent = function (color) { return (0xff & color) === 0; }; - var asString = function (color) { - var alpha = 0xff & color; - var blue = 0xff & (color >> 8); - var green = 0xff & (color >> 16); - var red = 0xff & (color >> 24); - return alpha < 255 ? "rgba(" + red + "," + green + "," + blue + "," + alpha / 255 + ")" : "rgb(" + red + "," + green + "," + blue + ")"; - }; - var pack = function (r, g, b, a) { - return ((r << 24) | (g << 16) | (b << 8) | (Math.round(a * 255) << 0)) >>> 0; - }; - var getTokenColorValue = function (token, i) { - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 16 /* PERCENTAGE_TOKEN */) { - var max = i === 3 ? 1 : 255; - return i === 3 ? (token.number / 100) * max : Math.round((token.number / 100) * max); - } - return 0; - }; - var rgb = function (_context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - if (tokens.length === 3) { - var _a = tokens.map(getTokenColorValue), r = _a[0], g = _a[1], b = _a[2]; - return pack(r, g, b, 1); - } - if (tokens.length === 4) { - var _b = tokens.map(getTokenColorValue), r = _b[0], g = _b[1], b = _b[2], a = _b[3]; - return pack(r, g, b, a); - } - return 0; - }; - function hue2rgb(t1, t2, hue) { - if (hue < 0) { - hue += 1; - } - if (hue >= 1) { - hue -= 1; - } - if (hue < 1 / 6) { - return (t2 - t1) * hue * 6 + t1; - } - else if (hue < 1 / 2) { - return t2; - } - else if (hue < 2 / 3) { - return (t2 - t1) * 6 * (2 / 3 - hue) + t1; - } - else { - return t1; - } - } - var hsl = function (context, args) { - var tokens = args.filter(nonFunctionArgSeparator); - var hue = tokens[0], saturation = tokens[1], lightness = tokens[2], alpha = tokens[3]; - var h = (hue.type === 17 /* NUMBER_TOKEN */ ? deg(hue.number) : angle.parse(context, hue)) / (Math.PI * 2); - var s = isLengthPercentage(saturation) ? saturation.number / 100 : 0; - var l = isLengthPercentage(lightness) ? lightness.number / 100 : 0; - var a = typeof alpha !== 'undefined' && isLengthPercentage(alpha) ? getAbsoluteValue(alpha, 1) : 1; - if (s === 0) { - return pack(l * 255, l * 255, l * 255, 1); - } - var t2 = l <= 0.5 ? 
l * (s + 1) : l + s - l * s; - var t1 = l * 2 - t2; - var r = hue2rgb(t1, t2, h + 1 / 3); - var g = hue2rgb(t1, t2, h); - var b = hue2rgb(t1, t2, h - 1 / 3); - return pack(r * 255, g * 255, b * 255, a); - }; - var SUPPORTED_COLOR_FUNCTIONS = { - hsl: hsl, - hsla: hsl, - rgb: rgb, - rgba: rgb - }; - var parseColor = function (context, value) { - return color$1.parse(context, Parser.create(value).parseComponentValue()); - }; - var COLORS = { - ALICEBLUE: 0xf0f8ffff, - ANTIQUEWHITE: 0xfaebd7ff, - AQUA: 0x00ffffff, - AQUAMARINE: 0x7fffd4ff, - AZURE: 0xf0ffffff, - BEIGE: 0xf5f5dcff, - BISQUE: 0xffe4c4ff, - BLACK: 0x000000ff, - BLANCHEDALMOND: 0xffebcdff, - BLUE: 0x0000ffff, - BLUEVIOLET: 0x8a2be2ff, - BROWN: 0xa52a2aff, - BURLYWOOD: 0xdeb887ff, - CADETBLUE: 0x5f9ea0ff, - CHARTREUSE: 0x7fff00ff, - CHOCOLATE: 0xd2691eff, - CORAL: 0xff7f50ff, - CORNFLOWERBLUE: 0x6495edff, - CORNSILK: 0xfff8dcff, - CRIMSON: 0xdc143cff, - CYAN: 0x00ffffff, - DARKBLUE: 0x00008bff, - DARKCYAN: 0x008b8bff, - DARKGOLDENROD: 0xb8860bff, - DARKGRAY: 0xa9a9a9ff, - DARKGREEN: 0x006400ff, - DARKGREY: 0xa9a9a9ff, - DARKKHAKI: 0xbdb76bff, - DARKMAGENTA: 0x8b008bff, - DARKOLIVEGREEN: 0x556b2fff, - DARKORANGE: 0xff8c00ff, - DARKORCHID: 0x9932ccff, - DARKRED: 0x8b0000ff, - DARKSALMON: 0xe9967aff, - DARKSEAGREEN: 0x8fbc8fff, - DARKSLATEBLUE: 0x483d8bff, - DARKSLATEGRAY: 0x2f4f4fff, - DARKSLATEGREY: 0x2f4f4fff, - DARKTURQUOISE: 0x00ced1ff, - DARKVIOLET: 0x9400d3ff, - DEEPPINK: 0xff1493ff, - DEEPSKYBLUE: 0x00bfffff, - DIMGRAY: 0x696969ff, - DIMGREY: 0x696969ff, - DODGERBLUE: 0x1e90ffff, - FIREBRICK: 0xb22222ff, - FLORALWHITE: 0xfffaf0ff, - FORESTGREEN: 0x228b22ff, - FUCHSIA: 0xff00ffff, - GAINSBORO: 0xdcdcdcff, - GHOSTWHITE: 0xf8f8ffff, - GOLD: 0xffd700ff, - GOLDENROD: 0xdaa520ff, - GRAY: 0x808080ff, - GREEN: 0x008000ff, - GREENYELLOW: 0xadff2fff, - GREY: 0x808080ff, - HONEYDEW: 0xf0fff0ff, - HOTPINK: 0xff69b4ff, - INDIANRED: 0xcd5c5cff, - INDIGO: 0x4b0082ff, - IVORY: 0xfffff0ff, - KHAKI: 0xf0e68cff, - LAVENDER: 0xe6e6faff, - LAVENDERBLUSH: 0xfff0f5ff, - LAWNGREEN: 0x7cfc00ff, - LEMONCHIFFON: 0xfffacdff, - LIGHTBLUE: 0xadd8e6ff, - LIGHTCORAL: 0xf08080ff, - LIGHTCYAN: 0xe0ffffff, - LIGHTGOLDENRODYELLOW: 0xfafad2ff, - LIGHTGRAY: 0xd3d3d3ff, - LIGHTGREEN: 0x90ee90ff, - LIGHTGREY: 0xd3d3d3ff, - LIGHTPINK: 0xffb6c1ff, - LIGHTSALMON: 0xffa07aff, - LIGHTSEAGREEN: 0x20b2aaff, - LIGHTSKYBLUE: 0x87cefaff, - LIGHTSLATEGRAY: 0x778899ff, - LIGHTSLATEGREY: 0x778899ff, - LIGHTSTEELBLUE: 0xb0c4deff, - LIGHTYELLOW: 0xffffe0ff, - LIME: 0x00ff00ff, - LIMEGREEN: 0x32cd32ff, - LINEN: 0xfaf0e6ff, - MAGENTA: 0xff00ffff, - MAROON: 0x800000ff, - MEDIUMAQUAMARINE: 0x66cdaaff, - MEDIUMBLUE: 0x0000cdff, - MEDIUMORCHID: 0xba55d3ff, - MEDIUMPURPLE: 0x9370dbff, - MEDIUMSEAGREEN: 0x3cb371ff, - MEDIUMSLATEBLUE: 0x7b68eeff, - MEDIUMSPRINGGREEN: 0x00fa9aff, - MEDIUMTURQUOISE: 0x48d1ccff, - MEDIUMVIOLETRED: 0xc71585ff, - MIDNIGHTBLUE: 0x191970ff, - MINTCREAM: 0xf5fffaff, - MISTYROSE: 0xffe4e1ff, - MOCCASIN: 0xffe4b5ff, - NAVAJOWHITE: 0xffdeadff, - NAVY: 0x000080ff, - OLDLACE: 0xfdf5e6ff, - OLIVE: 0x808000ff, - OLIVEDRAB: 0x6b8e23ff, - ORANGE: 0xffa500ff, - ORANGERED: 0xff4500ff, - ORCHID: 0xda70d6ff, - PALEGOLDENROD: 0xeee8aaff, - PALEGREEN: 0x98fb98ff, - PALETURQUOISE: 0xafeeeeff, - PALEVIOLETRED: 0xdb7093ff, - PAPAYAWHIP: 0xffefd5ff, - PEACHPUFF: 0xffdab9ff, - PERU: 0xcd853fff, - PINK: 0xffc0cbff, - PLUM: 0xdda0ddff, - POWDERBLUE: 0xb0e0e6ff, - PURPLE: 0x800080ff, - REBECCAPURPLE: 0x663399ff, - RED: 0xff0000ff, - ROSYBROWN: 0xbc8f8fff, - ROYALBLUE: 0x4169e1ff, - 
SADDLEBROWN: 0x8b4513ff, - SALMON: 0xfa8072ff, - SANDYBROWN: 0xf4a460ff, - SEAGREEN: 0x2e8b57ff, - SEASHELL: 0xfff5eeff, - SIENNA: 0xa0522dff, - SILVER: 0xc0c0c0ff, - SKYBLUE: 0x87ceebff, - SLATEBLUE: 0x6a5acdff, - SLATEGRAY: 0x708090ff, - SLATEGREY: 0x708090ff, - SNOW: 0xfffafaff, - SPRINGGREEN: 0x00ff7fff, - STEELBLUE: 0x4682b4ff, - TAN: 0xd2b48cff, - TEAL: 0x008080ff, - THISTLE: 0xd8bfd8ff, - TOMATO: 0xff6347ff, - TRANSPARENT: 0x00000000, - TURQUOISE: 0x40e0d0ff, - VIOLET: 0xee82eeff, - WHEAT: 0xf5deb3ff, - WHITE: 0xffffffff, - WHITESMOKE: 0xf5f5f5ff, - YELLOW: 0xffff00ff, - YELLOWGREEN: 0x9acd32ff - }; - - var backgroundClip = { - name: 'background-clip', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundColor = { - name: "background-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var parseColorStop = function (context, args) { - var color = color$1.parse(context, args[0]); - var stop = args[1]; - return stop && isLengthPercentage(stop) ? { color: color, stop: stop } : { color: color, stop: null }; - }; - var processColorStops = function (stops, lineLength) { - var first = stops[0]; - var last = stops[stops.length - 1]; - if (first.stop === null) { - first.stop = ZERO_LENGTH; - } - if (last.stop === null) { - last.stop = HUNDRED_PERCENT; - } - var processStops = []; - var previous = 0; - for (var i = 0; i < stops.length; i++) { - var stop_1 = stops[i].stop; - if (stop_1 !== null) { - var absoluteValue = getAbsoluteValue(stop_1, lineLength); - if (absoluteValue > previous) { - processStops.push(absoluteValue); - } - else { - processStops.push(previous); - } - previous = absoluteValue; - } - else { - processStops.push(null); - } - } - var gapBegin = null; - for (var i = 0; i < processStops.length; i++) { - var stop_2 = processStops[i]; - if (stop_2 === null) { - if (gapBegin === null) { - gapBegin = i; - } - } - else if (gapBegin !== null) { - var gapLength = i - gapBegin; - var beforeGap = processStops[gapBegin - 1]; - var gapValue = (stop_2 - beforeGap) / (gapLength + 1); - for (var g = 1; g <= gapLength; g++) { - processStops[gapBegin + g - 1] = gapValue * g; - } - gapBegin = null; - } - } - return stops.map(function (_a, i) { - var color = _a.color; - return { color: color, stop: Math.max(Math.min(1, processStops[i] / lineLength), 0) }; - }); - }; - var getAngleFromCorner = function (corner, width, height) { - var centerX = width / 2; - var centerY = height / 2; - var x = getAbsoluteValue(corner[0], width) - centerX; - var y = centerY - getAbsoluteValue(corner[1], height); - return (Math.atan2(y, x) + Math.PI * 2) % (Math.PI * 2); - }; - var calculateGradientDirection = function (angle, width, height) { - var radian = typeof angle === 'number' ? 
angle : getAngleFromCorner(angle, width, height); - var lineLength = Math.abs(width * Math.sin(radian)) + Math.abs(height * Math.cos(radian)); - var halfWidth = width / 2; - var halfHeight = height / 2; - var halfLineLength = lineLength / 2; - var yDiff = Math.sin(radian - Math.PI / 2) * halfLineLength; - var xDiff = Math.cos(radian - Math.PI / 2) * halfLineLength; - return [lineLength, halfWidth - xDiff, halfWidth + xDiff, halfHeight - yDiff, halfHeight + yDiff]; - }; - var distance = function (a, b) { return Math.sqrt(a * a + b * b); }; - var findCorner = function (width, height, x, y, closest) { - var corners = [ - [0, 0], - [0, height], - [width, 0], - [width, height] - ]; - return corners.reduce(function (stat, corner) { - var cx = corner[0], cy = corner[1]; - var d = distance(x - cx, y - cy); - if (closest ? d < stat.optimumDistance : d > stat.optimumDistance) { - return { - optimumCorner: corner, - optimumDistance: d - }; - } - return stat; - }, { - optimumDistance: closest ? Infinity : -Infinity, - optimumCorner: null - }).optimumCorner; - }; - var calculateRadius = function (gradient, x, y, width, height) { - var rx = 0; - var ry = 0; - switch (gradient.size) { - case 0 /* CLOSEST_SIDE */: - // The ending shape is sized so that it exactly meets the side of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, it exactly meets the closest side in each dimension. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.min(Math.abs(x), Math.abs(x - width)); - ry = Math.min(Math.abs(y), Math.abs(y - height)); - } - break; - case 2 /* CLOSEST_CORNER */: - // The ending shape is sized so that it passes through the corner of the gradient box closest to the gradient’s center. - // If the shape is an ellipse, the ending shape is given the same aspect-ratio it would have if closest-side were specified. - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.min(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "closest-side") - var c = Math.min(Math.abs(y), Math.abs(y - height)) / Math.min(Math.abs(x), Math.abs(x - width)); - var _a = findCorner(width, height, x, y, true), cx = _a[0], cy = _a[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - case 1 /* FARTHEST_SIDE */: - // Same as closest-side, except the ending shape is sized based on the farthest side(s) - if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(Math.abs(x), Math.abs(x - width), Math.abs(y), Math.abs(y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - rx = Math.max(Math.abs(x), Math.abs(x - width)); - ry = Math.max(Math.abs(y), Math.abs(y - height)); - } - break; - case 3 /* FARTHEST_CORNER */: - // Same as closest-corner, except the ending shape is sized based on the farthest corner. - // If the shape is an ellipse, the ending shape is given the same aspect ratio it would have if farthest-side were specified. 
- if (gradient.shape === 0 /* CIRCLE */) { - rx = ry = Math.max(distance(x, y), distance(x, y - height), distance(x - width, y), distance(x - width, y - height)); - } - else if (gradient.shape === 1 /* ELLIPSE */) { - // Compute the ratio ry/rx (which is to be the same as for "farthest-side") - var c = Math.max(Math.abs(y), Math.abs(y - height)) / Math.max(Math.abs(x), Math.abs(x - width)); - var _b = findCorner(width, height, x, y, false), cx = _b[0], cy = _b[1]; - rx = distance(cx - x, (cy - y) / c); - ry = c * rx; - } - break; - } - if (Array.isArray(gradient.size)) { - rx = getAbsoluteValue(gradient.size[0], width); - ry = gradient.size.length === 2 ? getAbsoluteValue(gradient.size[1], height) : rx; - } - return [rx, ry]; - }; - - var linearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && firstToken.value === 'to') { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = angle.parse(context, firstToken); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { angle: angle$1, stops: stops, type: 1 /* LINEAR_GRADIENT */ }; - }; - - var prefixLinearGradient = function (context, tokens) { - var angle$1 = deg(180); - var stops = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - if (i === 0) { - var firstToken = arg[0]; - if (firstToken.type === 20 /* IDENT_TOKEN */ && - ['top', 'left', 'right', 'bottom'].indexOf(firstToken.value) !== -1) { - angle$1 = parseNamedSide(arg); - return; - } - else if (isAngle(firstToken)) { - angle$1 = (angle.parse(context, firstToken) + deg(270)) % deg(360); - return; - } - } - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - }); - return { - angle: angle$1, - stops: stops, - type: 1 /* LINEAR_GRADIENT */ - }; - }; - - var webkitGradient = function (context, tokens) { - var angle = deg(180); - var stops = []; - var type = 1 /* LINEAR_GRADIENT */; - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var firstToken = arg[0]; - if (i === 0) { - if (isIdentToken(firstToken) && firstToken.value === 'linear') { - type = 1 /* LINEAR_GRADIENT */; - return; - } - else if (isIdentToken(firstToken) && firstToken.value === 'radial') { - type = 2 /* RADIAL_GRADIENT */; - return; - } - } - if (firstToken.type === 18 /* FUNCTION */) { - if (firstToken.name === 'from') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: ZERO_LENGTH, color: color }); - } - else if (firstToken.name === 'to') { - var color = color$1.parse(context, firstToken.values[0]); - stops.push({ stop: HUNDRED_PERCENT, color: color }); - } - else if (firstToken.name === 'color-stop') { - var values = firstToken.values.filter(nonFunctionArgSeparator); - if (values.length === 2) { - var color = color$1.parse(context, values[1]); - var stop_1 = values[0]; - if (isNumberToken(stop_1)) { - stops.push({ - stop: { type: 16 /* PERCENTAGE_TOKEN */, number: stop_1.number * 100, flags: stop_1.flags }, - color: color - }); - } - } - } - } - }); - return type === 1 /* LINEAR_GRADIENT */ - ? 
{ - angle: (angle + deg(180)) % deg(360), - stops: stops, - type: type - } - : { size: size, shape: shape, stops: stops, position: position, type: type }; - }; - - var CLOSEST_SIDE = 'closest-side'; - var FARTHEST_SIDE = 'farthest-side'; - var CLOSEST_CORNER = 'closest-corner'; - var FARTHEST_CORNER = 'farthest-corner'; - var CIRCLE = 'circle'; - var ELLIPSE = 'ellipse'; - var COVER = 'cover'; - var CONTAIN = 'contain'; - var radialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - var isAtPosition_1 = false; - isColorStop = arg.reduce(function (acc, token) { - if (isAtPosition_1) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return acc; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return acc; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return acc; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - } - } - else if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case 'at': - isAtPosition_1 = true; - return false; - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case COVER: - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CONTAIN: - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - } - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var prefixRadialGradient = function (context, tokens) { - var shape = 0 /* CIRCLE */; - var size = 3 /* FARTHEST_CORNER */; - var stops = []; - var position = []; - parseFunctionArgs(tokens).forEach(function (arg, i) { - var isColorStop = true; - if (i === 0) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'center': - position.push(FIFTY_PERCENT); - return false; - case 'top': - case 'left': - position.push(ZERO_LENGTH); - return false; - case 'right': - case 'bottom': - position.push(HUNDRED_PERCENT); - return false; - } - } - else if (isLengthPercentage(token) || isLength(token)) { - position.push(token); - return false; - } - return acc; - }, isColorStop); - } - else if (i === 1) { - isColorStop = arg.reduce(function (acc, token) { - if (isIdentToken(token)) { - switch (token.value) { - case CIRCLE: - shape = 0 /* CIRCLE */; - return false; - case ELLIPSE: - shape = 1 /* ELLIPSE */; - return false; - case CONTAIN: - case CLOSEST_SIDE: - size = 0 /* CLOSEST_SIDE */; - return false; - case FARTHEST_SIDE: - size = 1 /* FARTHEST_SIDE */; - return false; - case CLOSEST_CORNER: - size = 2 /* CLOSEST_CORNER */; - return false; - case COVER: - case FARTHEST_CORNER: - size = 3 /* FARTHEST_CORNER */; - return false; - } - } - else if (isLength(token) || isLengthPercentage(token)) { - if (!Array.isArray(size)) { - size = []; - } - size.push(token); - return false; - 
} - return acc; - }, isColorStop); - } - if (isColorStop) { - var colorStop = parseColorStop(context, arg); - stops.push(colorStop); - } - }); - return { size: size, shape: shape, stops: stops, position: position, type: 2 /* RADIAL_GRADIENT */ }; - }; - - var isLinearGradient = function (background) { - return background.type === 1 /* LINEAR_GRADIENT */; - }; - var isRadialGradient = function (background) { - return background.type === 2 /* RADIAL_GRADIENT */; - }; - var image = { - name: 'image', - parse: function (context, value) { - if (value.type === 22 /* URL_TOKEN */) { - var image_1 = { url: value.value, type: 0 /* URL */ }; - context.cache.addImage(value.value); - return image_1; - } - if (value.type === 18 /* FUNCTION */) { - var imageFunction = SUPPORTED_IMAGE_FUNCTIONS[value.name]; - if (typeof imageFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported image function \"" + value.name + "\""); - } - return imageFunction(context, value.values); - } - throw new Error("Unsupported image type " + value.type); - } - }; - function isSupportedImage(value) { - return (!(value.type === 20 /* IDENT_TOKEN */ && value.value === 'none') && - (value.type !== 18 /* FUNCTION */ || !!SUPPORTED_IMAGE_FUNCTIONS[value.name])); - } - var SUPPORTED_IMAGE_FUNCTIONS = { - 'linear-gradient': linearGradient, - '-moz-linear-gradient': prefixLinearGradient, - '-ms-linear-gradient': prefixLinearGradient, - '-o-linear-gradient': prefixLinearGradient, - '-webkit-linear-gradient': prefixLinearGradient, - 'radial-gradient': radialGradient, - '-moz-radial-gradient': prefixRadialGradient, - '-ms-radial-gradient': prefixRadialGradient, - '-o-radial-gradient': prefixRadialGradient, - '-webkit-radial-gradient': prefixRadialGradient, - '-webkit-gradient': webkitGradient - }; - - var backgroundImage = { - name: 'background-image', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens - .filter(function (value) { return nonFunctionArgSeparator(value) && isSupportedImage(value); }) - .map(function (value) { return image.parse(context, value); }); - } - }; - - var backgroundOrigin = { - name: 'background-origin', - initialValue: 'border-box', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.map(function (token) { - if (isIdentToken(token)) { - switch (token.value) { - case 'padding-box': - return 1 /* PADDING_BOX */; - case 'content-box': - return 2 /* CONTENT_BOX */; - } - } - return 0 /* BORDER_BOX */; - }); - } - }; - - var backgroundPosition = { - name: 'background-position', - initialValue: '0% 0%', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { return values.filter(isLengthPercentage); }) - .map(parseLengthPercentageTuple); - } - }; - - var backgroundRepeat = { - name: 'background-repeat', - initialValue: 'repeat', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens) - .map(function (values) { - return values - .filter(isIdentToken) - .map(function (token) { return token.value; }) - .join(' '); - }) - .map(parseBackgroundRepeat); - } - }; - var parseBackgroundRepeat = function (value) { - switch (value) { - case 'no-repeat': - return 1 /* NO_REPEAT */; - case 'repeat-x': - 
case 'repeat no-repeat': - return 2 /* REPEAT_X */; - case 'repeat-y': - case 'no-repeat repeat': - return 3 /* REPEAT_Y */; - case 'repeat': - default: - return 0 /* REPEAT */; - } - }; - - var BACKGROUND_SIZE; - (function (BACKGROUND_SIZE) { - BACKGROUND_SIZE["AUTO"] = "auto"; - BACKGROUND_SIZE["CONTAIN"] = "contain"; - BACKGROUND_SIZE["COVER"] = "cover"; - })(BACKGROUND_SIZE || (BACKGROUND_SIZE = {})); - var backgroundSize = { - name: 'background-size', - initialValue: '0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseFunctionArgs(tokens).map(function (values) { return values.filter(isBackgroundSizeInfoToken); }); - } - }; - var isBackgroundSizeInfoToken = function (value) { - return isIdentToken(value) || isLengthPercentage(value); - }; - - var borderColorForSide = function (side) { return ({ - name: "border-" + side + "-color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }); }; - var borderTopColor = borderColorForSide('top'); - var borderRightColor = borderColorForSide('right'); - var borderBottomColor = borderColorForSide('bottom'); - var borderLeftColor = borderColorForSide('left'); - - var borderRadiusForSide = function (side) { return ({ - name: "border-radius-" + side, - initialValue: '0 0', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return parseLengthPercentageTuple(tokens.filter(isLengthPercentage)); - } - }); }; - var borderTopLeftRadius = borderRadiusForSide('top-left'); - var borderTopRightRadius = borderRadiusForSide('top-right'); - var borderBottomRightRadius = borderRadiusForSide('bottom-right'); - var borderBottomLeftRadius = borderRadiusForSide('bottom-left'); - - var borderStyleForSide = function (side) { return ({ - name: "border-" + side + "-style", - initialValue: 'solid', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, style) { - switch (style) { - case 'none': - return 0 /* NONE */; - case 'dashed': - return 2 /* DASHED */; - case 'dotted': - return 3 /* DOTTED */; - case 'double': - return 4 /* DOUBLE */; - } - return 1 /* SOLID */; - } - }); }; - var borderTopStyle = borderStyleForSide('top'); - var borderRightStyle = borderStyleForSide('right'); - var borderBottomStyle = borderStyleForSide('bottom'); - var borderLeftStyle = borderStyleForSide('left'); - - var borderWidthForSide = function (side) { return ({ - name: "border-" + side + "-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }); }; - var borderTopWidth = borderWidthForSide('top'); - var borderRightWidth = borderWidthForSide('right'); - var borderBottomWidth = borderWidthForSide('bottom'); - var borderLeftWidth = borderWidthForSide('left'); - - var color = { - name: "color", - initialValue: 'transparent', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var direction = { - name: 'direction', - initialValue: 'ltr', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, direction) { - switch (direction) { - case 'rtl': - return 1 /* RTL */; - case 'ltr': - default: - return 0 /* LTR */; - } - } - }; - - var display = { - name: 'display', - initialValue: 'inline-block', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).reduce(function (bit, token) { - return bit | parseDisplayValue(token.value); - }, 0 /* 
NONE */); - } - }; - var parseDisplayValue = function (display) { - switch (display) { - case 'block': - case '-webkit-box': - return 2 /* BLOCK */; - case 'inline': - return 4 /* INLINE */; - case 'run-in': - return 8 /* RUN_IN */; - case 'flow': - return 16 /* FLOW */; - case 'flow-root': - return 32 /* FLOW_ROOT */; - case 'table': - return 64 /* TABLE */; - case 'flex': - case '-webkit-flex': - return 128 /* FLEX */; - case 'grid': - case '-ms-grid': - return 256 /* GRID */; - case 'ruby': - return 512 /* RUBY */; - case 'subgrid': - return 1024 /* SUBGRID */; - case 'list-item': - return 2048 /* LIST_ITEM */; - case 'table-row-group': - return 4096 /* TABLE_ROW_GROUP */; - case 'table-header-group': - return 8192 /* TABLE_HEADER_GROUP */; - case 'table-footer-group': - return 16384 /* TABLE_FOOTER_GROUP */; - case 'table-row': - return 32768 /* TABLE_ROW */; - case 'table-cell': - return 65536 /* TABLE_CELL */; - case 'table-column-group': - return 131072 /* TABLE_COLUMN_GROUP */; - case 'table-column': - return 262144 /* TABLE_COLUMN */; - case 'table-caption': - return 524288 /* TABLE_CAPTION */; - case 'ruby-base': - return 1048576 /* RUBY_BASE */; - case 'ruby-text': - return 2097152 /* RUBY_TEXT */; - case 'ruby-base-container': - return 4194304 /* RUBY_BASE_CONTAINER */; - case 'ruby-text-container': - return 8388608 /* RUBY_TEXT_CONTAINER */; - case 'contents': - return 16777216 /* CONTENTS */; - case 'inline-block': - return 33554432 /* INLINE_BLOCK */; - case 'inline-list-item': - return 67108864 /* INLINE_LIST_ITEM */; - case 'inline-table': - return 134217728 /* INLINE_TABLE */; - case 'inline-flex': - return 268435456 /* INLINE_FLEX */; - case 'inline-grid': - return 536870912 /* INLINE_GRID */; - } - return 0 /* NONE */; - }; - - var float = { - name: 'float', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, float) { - switch (float) { - case 'left': - return 1 /* LEFT */; - case 'right': - return 2 /* RIGHT */; - case 'inline-start': - return 3 /* INLINE_START */; - case 'inline-end': - return 4 /* INLINE_END */; - } - return 0 /* NONE */; - } - }; - - var letterSpacing = { - name: 'letter-spacing', - initialValue: '0', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'normal') { - return 0; - } - if (token.type === 17 /* NUMBER_TOKEN */) { - return token.number; - } - if (token.type === 15 /* DIMENSION_TOKEN */) { - return token.number; - } - return 0; - } - }; - - var LINE_BREAK; - (function (LINE_BREAK) { - LINE_BREAK["NORMAL"] = "normal"; - LINE_BREAK["STRICT"] = "strict"; - })(LINE_BREAK || (LINE_BREAK = {})); - var lineBreak = { - name: 'line-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, lineBreak) { - switch (lineBreak) { - case 'strict': - return LINE_BREAK.STRICT; - case 'normal': - default: - return LINE_BREAK.NORMAL; - } - } - }; - - var lineHeight = { - name: 'line-height', - initialValue: 'normal', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }; - var computeLineHeight = function (token, fontSize) { - if (isIdentToken(token) && token.value === 'normal') { - return 1.2 * fontSize; - } - else if (token.type === 17 /* NUMBER_TOKEN */) { - return fontSize * token.number; - } - else if (isLengthPercentage(token)) { - return getAbsoluteValue(token, fontSize); - } - return fontSize; - }; - - var listStyleImage = { - name: 'list-style-image', - initialValue: 
'none', - type: 0 /* VALUE */, - prefix: false, - parse: function (context, token) { - if (token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - return image.parse(context, token); - } - }; - - var listStylePosition = { - name: 'list-style-position', - initialValue: 'outside', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'inside': - return 0 /* INSIDE */; - case 'outside': - default: - return 1 /* OUTSIDE */; - } - } - }; - - var listStyleType = { - name: 'list-style-type', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, type) { - switch (type) { - case 'disc': - return 0 /* DISC */; - case 'circle': - return 1 /* CIRCLE */; - case 'square': - return 2 /* SQUARE */; - case 'decimal': - return 3 /* DECIMAL */; - case 'cjk-decimal': - return 4 /* CJK_DECIMAL */; - case 'decimal-leading-zero': - return 5 /* DECIMAL_LEADING_ZERO */; - case 'lower-roman': - return 6 /* LOWER_ROMAN */; - case 'upper-roman': - return 7 /* UPPER_ROMAN */; - case 'lower-greek': - return 8 /* LOWER_GREEK */; - case 'lower-alpha': - return 9 /* LOWER_ALPHA */; - case 'upper-alpha': - return 10 /* UPPER_ALPHA */; - case 'arabic-indic': - return 11 /* ARABIC_INDIC */; - case 'armenian': - return 12 /* ARMENIAN */; - case 'bengali': - return 13 /* BENGALI */; - case 'cambodian': - return 14 /* CAMBODIAN */; - case 'cjk-earthly-branch': - return 15 /* CJK_EARTHLY_BRANCH */; - case 'cjk-heavenly-stem': - return 16 /* CJK_HEAVENLY_STEM */; - case 'cjk-ideographic': - return 17 /* CJK_IDEOGRAPHIC */; - case 'devanagari': - return 18 /* DEVANAGARI */; - case 'ethiopic-numeric': - return 19 /* ETHIOPIC_NUMERIC */; - case 'georgian': - return 20 /* GEORGIAN */; - case 'gujarati': - return 21 /* GUJARATI */; - case 'gurmukhi': - return 22 /* GURMUKHI */; - case 'hebrew': - return 22 /* HEBREW */; - case 'hiragana': - return 23 /* HIRAGANA */; - case 'hiragana-iroha': - return 24 /* HIRAGANA_IROHA */; - case 'japanese-formal': - return 25 /* JAPANESE_FORMAL */; - case 'japanese-informal': - return 26 /* JAPANESE_INFORMAL */; - case 'kannada': - return 27 /* KANNADA */; - case 'katakana': - return 28 /* KATAKANA */; - case 'katakana-iroha': - return 29 /* KATAKANA_IROHA */; - case 'khmer': - return 30 /* KHMER */; - case 'korean-hangul-formal': - return 31 /* KOREAN_HANGUL_FORMAL */; - case 'korean-hanja-formal': - return 32 /* KOREAN_HANJA_FORMAL */; - case 'korean-hanja-informal': - return 33 /* KOREAN_HANJA_INFORMAL */; - case 'lao': - return 34 /* LAO */; - case 'lower-armenian': - return 35 /* LOWER_ARMENIAN */; - case 'malayalam': - return 36 /* MALAYALAM */; - case 'mongolian': - return 37 /* MONGOLIAN */; - case 'myanmar': - return 38 /* MYANMAR */; - case 'oriya': - return 39 /* ORIYA */; - case 'persian': - return 40 /* PERSIAN */; - case 'simp-chinese-formal': - return 41 /* SIMP_CHINESE_FORMAL */; - case 'simp-chinese-informal': - return 42 /* SIMP_CHINESE_INFORMAL */; - case 'tamil': - return 43 /* TAMIL */; - case 'telugu': - return 44 /* TELUGU */; - case 'thai': - return 45 /* THAI */; - case 'tibetan': - return 46 /* TIBETAN */; - case 'trad-chinese-formal': - return 47 /* TRAD_CHINESE_FORMAL */; - case 'trad-chinese-informal': - return 48 /* TRAD_CHINESE_INFORMAL */; - case 'upper-armenian': - return 49 /* UPPER_ARMENIAN */; - case 'disclosure-open': - return 50 /* DISCLOSURE_OPEN */; - case 'disclosure-closed': - return 51 /* DISCLOSURE_CLOSED */; - case 
'none': - default: - return -1 /* NONE */; - } - } - }; - - var marginForSide = function (side) { return ({ - name: "margin-" + side, - initialValue: '0', - prefix: false, - type: 4 /* TOKEN_VALUE */ - }); }; - var marginTop = marginForSide('top'); - var marginRight = marginForSide('right'); - var marginBottom = marginForSide('bottom'); - var marginLeft = marginForSide('left'); - - var overflow = { - name: 'overflow', - initialValue: 'visible', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (overflow) { - switch (overflow.value) { - case 'hidden': - return 1 /* HIDDEN */; - case 'scroll': - return 2 /* SCROLL */; - case 'clip': - return 3 /* CLIP */; - case 'auto': - return 4 /* AUTO */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - }); - } - }; - - var overflowWrap = { - name: 'overflow-wrap', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'break-word': - return "break-word" /* BREAK_WORD */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var paddingForSide = function (side) { return ({ - name: "padding-" + side, - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length-percentage' - }); }; - var paddingTop = paddingForSide('top'); - var paddingRight = paddingForSide('right'); - var paddingBottom = paddingForSide('bottom'); - var paddingLeft = paddingForSide('left'); - - var textAlign = { - name: 'text-align', - initialValue: 'left', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textAlign) { - switch (textAlign) { - case 'right': - return 2 /* RIGHT */; - case 'center': - case 'justify': - return 1 /* CENTER */; - case 'left': - default: - return 0 /* LEFT */; - } - } - }; - - var position = { - name: 'position', - initialValue: 'static', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, position) { - switch (position) { - case 'relative': - return 1 /* RELATIVE */; - case 'absolute': - return 2 /* ABSOLUTE */; - case 'fixed': - return 3 /* FIXED */; - case 'sticky': - return 4 /* STICKY */; - } - return 0 /* STATIC */; - } - }; - - var textShadow = { - name: 'text-shadow', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (context, tokens) { - if (tokens.length === 1 && isIdentWithValue(tokens[0], 'none')) { - return []; - } - return parseFunctionArgs(tokens).map(function (values) { - var shadow = { - color: COLORS.TRANSPARENT, - offsetX: ZERO_LENGTH, - offsetY: ZERO_LENGTH, - blur: ZERO_LENGTH - }; - var c = 0; - for (var i = 0; i < values.length; i++) { - var token = values[i]; - if (isLength(token)) { - if (c === 0) { - shadow.offsetX = token; - } - else if (c === 1) { - shadow.offsetY = token; - } - else { - shadow.blur = token; - } - c++; - } - else { - shadow.color = color$1.parse(context, token); - } - } - return shadow; - }); - } - }; - - var textTransform = { - name: 'text-transform', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, textTransform) { - switch (textTransform) { - case 'uppercase': - return 2 /* UPPERCASE */; - case 'lowercase': - return 1 /* LOWERCASE */; - case 'capitalize': - return 3 /* CAPITALIZE */; - } - return 0 /* NONE */; - } - }; - - var transform$1 = { - name: 'transform', - initialValue: 'none', - prefix: true, - type: 0 /* VALUE */, - parse: function (_context, token) { - if 
(token.type === 20 /* IDENT_TOKEN */ && token.value === 'none') { - return null; - } - if (token.type === 18 /* FUNCTION */) { - var transformFunction = SUPPORTED_TRANSFORM_FUNCTIONS[token.name]; - if (typeof transformFunction === 'undefined') { - throw new Error("Attempting to parse an unsupported transform function \"" + token.name + "\""); - } - return transformFunction(token.values); - } - return null; - } - }; - var matrix = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - return values.length === 6 ? values : null; - }; - // doesn't support 3D transforms at the moment - var matrix3d = function (args) { - var values = args.filter(function (arg) { return arg.type === 17 /* NUMBER_TOKEN */; }).map(function (arg) { return arg.number; }); - var a1 = values[0], b1 = values[1]; values[2]; values[3]; var a2 = values[4], b2 = values[5]; values[6]; values[7]; values[8]; values[9]; values[10]; values[11]; var a4 = values[12], b4 = values[13]; values[14]; values[15]; - return values.length === 16 ? [a1, b1, a2, b2, a4, b4] : null; - }; - var SUPPORTED_TRANSFORM_FUNCTIONS = { - matrix: matrix, - matrix3d: matrix3d - }; - - var DEFAULT_VALUE = { - type: 16 /* PERCENTAGE_TOKEN */, - number: 50, - flags: FLAG_INTEGER - }; - var DEFAULT = [DEFAULT_VALUE, DEFAULT_VALUE]; - var transformOrigin = { - name: 'transform-origin', - initialValue: '50% 50%', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var origins = tokens.filter(isLengthPercentage); - if (origins.length !== 2) { - return DEFAULT; - } - return [origins[0], origins[1]]; - } - }; - - var visibility = { - name: 'visible', - initialValue: 'none', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, visibility) { - switch (visibility) { - case 'hidden': - return 1 /* HIDDEN */; - case 'collapse': - return 2 /* COLLAPSE */; - case 'visible': - default: - return 0 /* VISIBLE */; - } - } - }; - - var WORD_BREAK; - (function (WORD_BREAK) { - WORD_BREAK["NORMAL"] = "normal"; - WORD_BREAK["BREAK_ALL"] = "break-all"; - WORD_BREAK["KEEP_ALL"] = "keep-all"; - })(WORD_BREAK || (WORD_BREAK = {})); - var wordBreak = { - name: 'word-break', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, wordBreak) { - switch (wordBreak) { - case 'break-all': - return WORD_BREAK.BREAK_ALL; - case 'keep-all': - return WORD_BREAK.KEEP_ALL; - case 'normal': - default: - return WORD_BREAK.NORMAL; - } - } - }; - - var zIndex = { - name: 'z-index', - initialValue: 'auto', - prefix: false, - type: 0 /* VALUE */, - parse: function (_context, token) { - if (token.type === 20 /* IDENT_TOKEN */) { - return { auto: true, order: 0 }; - } - if (isNumberToken(token)) { - return { auto: false, order: token.number }; - } - throw new Error("Invalid z-index number parsed"); - } - }; - - var time = { - name: 'time', - parse: function (_context, value) { - if (value.type === 15 /* DIMENSION_TOKEN */) { - switch (value.unit.toLowerCase()) { - case 's': - return 1000 * value.number; - case 'ms': - return value.number; - } - } - throw new Error("Unsupported time type"); - } - }; - - var opacity = { - name: 'opacity', - initialValue: '1', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - return 1; - } - }; - - var textDecorationColor = { - name: "text-decoration-color", - initialValue: 'transparent', - 
prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var textDecorationLine = { - name: 'text-decoration-line', - initialValue: 'none', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - return tokens - .filter(isIdentToken) - .map(function (token) { - switch (token.value) { - case 'underline': - return 1 /* UNDERLINE */; - case 'overline': - return 2 /* OVERLINE */; - case 'line-through': - return 3 /* LINE_THROUGH */; - case 'none': - return 4 /* BLINK */; - } - return 0 /* NONE */; - }) - .filter(function (line) { return line !== 0 /* NONE */; }); - } - }; - - var fontFamily = { - name: "font-family", - initialValue: '', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var accumulator = []; - var results = []; - tokens.forEach(function (token) { - switch (token.type) { - case 20 /* IDENT_TOKEN */: - case 0 /* STRING_TOKEN */: - accumulator.push(token.value); - break; - case 17 /* NUMBER_TOKEN */: - accumulator.push(token.number.toString()); - break; - case 4 /* COMMA_TOKEN */: - results.push(accumulator.join(' ')); - accumulator.length = 0; - break; - } - }); - if (accumulator.length) { - results.push(accumulator.join(' ')); - } - return results.map(function (result) { return (result.indexOf(' ') === -1 ? result : "'" + result + "'"); }); - } - }; - - var fontSize = { - name: "font-size", - initialValue: '0', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'length' - }; - - var fontWeight = { - name: 'font-weight', - initialValue: 'normal', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isNumberToken(token)) { - return token.number; - } - if (isIdentToken(token)) { - switch (token.value) { - case 'bold': - return 700; - case 'normal': - default: - return 400; - } - } - return 400; - } - }; - - var fontVariant = { - name: 'font-variant', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - return tokens.filter(isIdentToken).map(function (token) { return token.value; }); - } - }; - - var fontStyle = { - name: 'font-style', - initialValue: 'normal', - prefix: false, - type: 2 /* IDENT_VALUE */, - parse: function (_context, overflow) { - switch (overflow) { - case 'oblique': - return "oblique" /* OBLIQUE */; - case 'italic': - return "italic" /* ITALIC */; - case 'normal': - default: - return "normal" /* NORMAL */; - } - } - }; - - var contains = function (bit, value) { return (bit & value) !== 0; }; - - var content = { - name: 'content', - initialValue: 'none', - type: 1 /* LIST */, - prefix: false, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return []; - } - return tokens; - } - }; - - var counterIncrement = { - name: 'counter-increment', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var increments = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (counter.type === 20 /* IDENT_TOKEN */) { - var increment = next && isNumberToken(next) ? 
next.number : 1; - increments.push({ counter: counter.value, increment: increment }); - } - } - return increments; - } - }; - - var counterReset = { - name: 'counter-reset', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return []; - } - var resets = []; - var filtered = tokens.filter(nonWhiteSpace); - for (var i = 0; i < filtered.length; i++) { - var counter = filtered[i]; - var next = filtered[i + 1]; - if (isIdentToken(counter) && counter.value !== 'none') { - var reset = next && isNumberToken(next) ? next.number : 0; - resets.push({ counter: counter.value, reset: reset }); - } - } - return resets; - } - }; - - var duration = { - name: 'duration', - initialValue: '0s', - prefix: false, - type: 1 /* LIST */, - parse: function (context, tokens) { - return tokens.filter(isDimensionToken).map(function (token) { return time.parse(context, token); }); - } - }; - - var quotes = { - name: 'quotes', - initialValue: 'none', - prefix: true, - type: 1 /* LIST */, - parse: function (_context, tokens) { - if (tokens.length === 0) { - return null; - } - var first = tokens[0]; - if (first.type === 20 /* IDENT_TOKEN */ && first.value === 'none') { - return null; - } - var quotes = []; - var filtered = tokens.filter(isStringToken); - if (filtered.length % 2 !== 0) { - return null; - } - for (var i = 0; i < filtered.length; i += 2) { - var open_1 = filtered[i].value; - var close_1 = filtered[i + 1].value; - quotes.push({ open: open_1, close: close_1 }); - } - return quotes; - } - }; - var getQuote = function (quotes, depth, open) { - if (!quotes) { - return ''; - } - var quote = quotes[Math.min(depth, quotes.length - 1)]; - if (!quote) { - return ''; - } - return open ? quote.open : quote.close; - }; - - var paintOrder = { - name: 'paint-order', - initialValue: 'normal', - prefix: false, - type: 1 /* LIST */, - parse: function (_context, tokens) { - var DEFAULT_VALUE = [0 /* FILL */, 1 /* STROKE */, 2 /* MARKERS */]; - var layers = []; - tokens.filter(isIdentToken).forEach(function (token) { - switch (token.value) { - case 'stroke': - layers.push(1 /* STROKE */); - break; - case 'fill': - layers.push(0 /* FILL */); - break; - case 'markers': - layers.push(2 /* MARKERS */); - break; - } - }); - DEFAULT_VALUE.forEach(function (value) { - if (layers.indexOf(value) === -1) { - layers.push(value); - } - }); - return layers; - } - }; - - var webkitTextStrokeColor = { - name: "-webkit-text-stroke-color", - initialValue: 'currentcolor', - prefix: false, - type: 3 /* TYPE_VALUE */, - format: 'color' - }; - - var webkitTextStrokeWidth = { - name: "-webkit-text-stroke-width", - initialValue: '0', - type: 0 /* VALUE */, - prefix: false, - parse: function (_context, token) { - if (isDimensionToken(token)) { - return token.number; - } - return 0; - } - }; - - var CSSParsedDeclaration = /** @class */ (function () { - function CSSParsedDeclaration(context, declaration) { - var _a, _b; - this.animationDuration = parse(context, duration, declaration.animationDuration); - this.backgroundClip = parse(context, backgroundClip, declaration.backgroundClip); - this.backgroundColor = parse(context, backgroundColor, declaration.backgroundColor); - this.backgroundImage = parse(context, backgroundImage, declaration.backgroundImage); - this.backgroundOrigin = parse(context, backgroundOrigin, declaration.backgroundOrigin); - this.backgroundPosition = parse(context, backgroundPosition, declaration.backgroundPosition); - this.backgroundRepeat = 
parse(context, backgroundRepeat, declaration.backgroundRepeat); - this.backgroundSize = parse(context, backgroundSize, declaration.backgroundSize); - this.borderTopColor = parse(context, borderTopColor, declaration.borderTopColor); - this.borderRightColor = parse(context, borderRightColor, declaration.borderRightColor); - this.borderBottomColor = parse(context, borderBottomColor, declaration.borderBottomColor); - this.borderLeftColor = parse(context, borderLeftColor, declaration.borderLeftColor); - this.borderTopLeftRadius = parse(context, borderTopLeftRadius, declaration.borderTopLeftRadius); - this.borderTopRightRadius = parse(context, borderTopRightRadius, declaration.borderTopRightRadius); - this.borderBottomRightRadius = parse(context, borderBottomRightRadius, declaration.borderBottomRightRadius); - this.borderBottomLeftRadius = parse(context, borderBottomLeftRadius, declaration.borderBottomLeftRadius); - this.borderTopStyle = parse(context, borderTopStyle, declaration.borderTopStyle); - this.borderRightStyle = parse(context, borderRightStyle, declaration.borderRightStyle); - this.borderBottomStyle = parse(context, borderBottomStyle, declaration.borderBottomStyle); - this.borderLeftStyle = parse(context, borderLeftStyle, declaration.borderLeftStyle); - this.borderTopWidth = parse(context, borderTopWidth, declaration.borderTopWidth); - this.borderRightWidth = parse(context, borderRightWidth, declaration.borderRightWidth); - this.borderBottomWidth = parse(context, borderBottomWidth, declaration.borderBottomWidth); - this.borderLeftWidth = parse(context, borderLeftWidth, declaration.borderLeftWidth); - this.color = parse(context, color, declaration.color); - this.direction = parse(context, direction, declaration.direction); - this.display = parse(context, display, declaration.display); - this.float = parse(context, float, declaration.cssFloat); - this.fontFamily = parse(context, fontFamily, declaration.fontFamily); - this.fontSize = parse(context, fontSize, declaration.fontSize); - this.fontStyle = parse(context, fontStyle, declaration.fontStyle); - this.fontVariant = parse(context, fontVariant, declaration.fontVariant); - this.fontWeight = parse(context, fontWeight, declaration.fontWeight); - this.letterSpacing = parse(context, letterSpacing, declaration.letterSpacing); - this.lineBreak = parse(context, lineBreak, declaration.lineBreak); - this.lineHeight = parse(context, lineHeight, declaration.lineHeight); - this.listStyleImage = parse(context, listStyleImage, declaration.listStyleImage); - this.listStylePosition = parse(context, listStylePosition, declaration.listStylePosition); - this.listStyleType = parse(context, listStyleType, declaration.listStyleType); - this.marginTop = parse(context, marginTop, declaration.marginTop); - this.marginRight = parse(context, marginRight, declaration.marginRight); - this.marginBottom = parse(context, marginBottom, declaration.marginBottom); - this.marginLeft = parse(context, marginLeft, declaration.marginLeft); - this.opacity = parse(context, opacity, declaration.opacity); - var overflowTuple = parse(context, overflow, declaration.overflow); - this.overflowX = overflowTuple[0]; - this.overflowY = overflowTuple[overflowTuple.length > 1 ? 
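- // a single overflow keyword applies to both axes - 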
1 : 0]; - this.overflowWrap = parse(context, overflowWrap, declaration.overflowWrap); - this.paddingTop = parse(context, paddingTop, declaration.paddingTop); - this.paddingRight = parse(context, paddingRight, declaration.paddingRight); - this.paddingBottom = parse(context, paddingBottom, declaration.paddingBottom); - this.paddingLeft = parse(context, paddingLeft, declaration.paddingLeft); - this.paintOrder = parse(context, paintOrder, declaration.paintOrder); - this.position = parse(context, position, declaration.position); - this.textAlign = parse(context, textAlign, declaration.textAlign); - this.textDecorationColor = parse(context, textDecorationColor, (_a = declaration.textDecorationColor) !== null && _a !== void 0 ? _a : declaration.color); - this.textDecorationLine = parse(context, textDecorationLine, (_b = declaration.textDecorationLine) !== null && _b !== void 0 ? _b : declaration.textDecoration); - this.textShadow = parse(context, textShadow, declaration.textShadow); - this.textTransform = parse(context, textTransform, declaration.textTransform); - this.transform = parse(context, transform$1, declaration.transform); - this.transformOrigin = parse(context, transformOrigin, declaration.transformOrigin); - this.visibility = parse(context, visibility, declaration.visibility); - this.webkitTextStrokeColor = parse(context, webkitTextStrokeColor, declaration.webkitTextStrokeColor); - this.webkitTextStrokeWidth = parse(context, webkitTextStrokeWidth, declaration.webkitTextStrokeWidth); - this.wordBreak = parse(context, wordBreak, declaration.wordBreak); - this.zIndex = parse(context, zIndex, declaration.zIndex); - } - CSSParsedDeclaration.prototype.isVisible = function () { - return this.display > 0 && this.opacity > 0 && this.visibility === 0 /* VISIBLE */; - }; - CSSParsedDeclaration.prototype.isTransparent = function () { - return isTransparent(this.backgroundColor); - }; - CSSParsedDeclaration.prototype.isTransformed = function () { - return this.transform !== null; - }; - CSSParsedDeclaration.prototype.isPositioned = function () { - return this.position !== 0 /* STATIC */; - }; - CSSParsedDeclaration.prototype.isPositionedWithZIndex = function () { - return this.isPositioned() && !this.zIndex.auto; - }; - CSSParsedDeclaration.prototype.isFloating = function () { - return this.float !== 0 /* NONE */; - }; - CSSParsedDeclaration.prototype.isInlineLevel = function () { - return (contains(this.display, 4 /* INLINE */) || - contains(this.display, 33554432 /* INLINE_BLOCK */) || - contains(this.display, 268435456 /* INLINE_FLEX */) || - contains(this.display, 536870912 /* INLINE_GRID */) || - contains(this.display, 67108864 /* INLINE_LIST_ITEM */) || - contains(this.display, 134217728 /* INLINE_TABLE */)); - }; - return CSSParsedDeclaration; - }()); - var CSSParsedPseudoDeclaration = /** @class */ (function () { - function CSSParsedPseudoDeclaration(context, declaration) { - this.content = parse(context, content, declaration.content); - this.quotes = parse(context, quotes, declaration.quotes); - } - return CSSParsedPseudoDeclaration; - }()); - var CSSParsedCounterDeclaration = /** @class */ (function () { - function CSSParsedCounterDeclaration(context, declaration) { - this.counterIncrement = parse(context, counterIncrement, declaration.counterIncrement); - this.counterReset = parse(context, counterReset, declaration.counterReset); - } - return CSSParsedCounterDeclaration; - }()); - // eslint-disable-next-line @typescript-eslint/no-explicit-any - var parse = function (context, descriptor, 
style) { - var tokenizer = new Tokenizer(); - var value = style !== null && typeof style !== 'undefined' ? style.toString() : descriptor.initialValue; - tokenizer.write(value); - var parser = new Parser(tokenizer.read()); - switch (descriptor.type) { - case 2 /* IDENT_VALUE */: - var token = parser.parseComponentValue(); - return descriptor.parse(context, isIdentToken(token) ? token.value : descriptor.initialValue); - case 0 /* VALUE */: - return descriptor.parse(context, parser.parseComponentValue()); - case 1 /* LIST */: - return descriptor.parse(context, parser.parseComponentValues()); - case 4 /* TOKEN_VALUE */: - return parser.parseComponentValue(); - case 3 /* TYPE_VALUE */: - switch (descriptor.format) { - case 'angle': - return angle.parse(context, parser.parseComponentValue()); - case 'color': - return color$1.parse(context, parser.parseComponentValue()); - case 'image': - return image.parse(context, parser.parseComponentValue()); - case 'length': - var length_1 = parser.parseComponentValue(); - return isLength(length_1) ? length_1 : ZERO_LENGTH; - case 'length-percentage': - var value_1 = parser.parseComponentValue(); - return isLengthPercentage(value_1) ? value_1 : ZERO_LENGTH; - case 'time': - return time.parse(context, parser.parseComponentValue()); - } - break; - } - }; - - var elementDebuggerAttribute = 'data-html2canvas-debug'; - var getElementDebugType = function (element) { - var attribute = element.getAttribute(elementDebuggerAttribute); - switch (attribute) { - case 'all': - return 1 /* ALL */; - case 'clone': - return 2 /* CLONE */; - case 'parse': - return 3 /* PARSE */; - case 'render': - return 4 /* RENDER */; - default: - return 0 /* NONE */; - } - }; - var isDebugging = function (element, type) { - var elementType = getElementDebugType(element); - return elementType === 1 /* ALL */ || type === elementType; - }; - - var ElementContainer = /** @class */ (function () { - function ElementContainer(context, element) { - this.context = context; - this.textNodes = []; - this.elements = []; - this.flags = 0; - if (isDebugging(element, 3 /* PARSE */)) { - debugger; - } - this.styles = new CSSParsedDeclaration(context, window.getComputedStyle(element, null)); - if (isHTMLElementNode(element)) { - if (this.styles.animationDuration.some(function (duration) { return duration > 0; })) { - element.style.animationDuration = '0s'; - } - if (this.styles.transform !== null) { - // getBoundingClientRect takes transforms into account - element.style.transform = 'none'; - } - } - this.bounds = parseBounds(this.context, element); - if (isDebugging(element, 4 /* RENDER */)) { - this.flags |= 16 /* DEBUG_RENDER */; - } - } - return ElementContainer; - }()); - - /* - * text-segmentation 1.0.3 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var base64 = 
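- // base64-encoded UTRIE2 trie of Unicode grapheme-cluster-break properties, decoded below via createTrieFromBase64 - 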
'AAAAAAAAAAAAEA4AGBkAAFAaAAACAAAAAAAIABAAGAAwADgACAAQAAgAEAAIABAACAAQAAgAEAAIABAACAAQAAgAEAAIABAAQABIAEQATAAIABAACAAQAAgAEAAIABAAVABcAAgAEAAIABAACAAQAGAAaABwAHgAgACIAI4AlgAIABAAmwCjAKgAsAC2AL4AvQDFAMoA0gBPAVYBWgEIAAgACACMANoAYgFkAWwBdAF8AX0BhQGNAZUBlgGeAaMBlQGWAasBswF8AbsBwwF0AcsBYwHTAQgA2wG/AOMBdAF8AekB8QF0AfkB+wHiAHQBfAEIAAMC5gQIAAsCEgIIAAgAFgIeAggAIgIpAggAMQI5AkACygEIAAgASAJQAlgCYAIIAAgACAAKBQoFCgUTBRMFGQUrBSsFCAAIAAgACAAIAAgACAAIAAgACABdAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABoAmgCrwGvAQgAbgJ2AggAHgEIAAgACADnAXsCCAAIAAgAgwIIAAgACAAIAAgACACKAggAkQKZAggAPADJAAgAoQKkAqwCsgK6AsICCADJAggA0AIIAAgACAAIANYC3gIIAAgACAAIAAgACABAAOYCCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAkASoB+QIEAAgACAA8AEMCCABCBQgACABJBVAFCAAIAAgACAAIAAgACAAIAAgACABTBVoFCAAIAFoFCABfBWUFCAAIAAgACAAIAAgAbQUIAAgACAAIAAgACABzBXsFfQWFBYoFigWKBZEFigWKBYoFmAWfBaYFrgWxBbkFCAAIAAgACAAIAAgACAAIAAgACAAIAMEFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAMgFCADQBQgACAAIAAgACAAIAAgACAAIAAgACAAIAO4CCAAIAAgAiQAIAAgACABAAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAD0AggACAD8AggACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIANYFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACA
AIAAgACAAIAAgACAAIAAgACAAIAAMDvwAIAAgAJAIIAAgACAAIAAgACAAIAAgACwMTAwgACAB9BOsEGwMjAwgAKwMyAwsFYgE3A/MEPwMIAEUDTQNRAwgAWQOsAGEDCAAIAAgACAAIAAgACABpAzQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFOgU0BTUFNgU3BTgFOQU6BTQFNQU2BTcFOAU5BToFNAU1BTYFNwU4BTkFIQUoBSwFCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABtAwgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABMAEwACAAIAAgACAAIABgACAAIAAgACAC/AAgACAAyAQgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACAAIAAwAAgACAAIAAgACAAIAAgACAAIAAAARABIAAgACAAIABQASAAIAAgAIABwAEAAjgCIABsAqAC2AL0AigDQAtwC+IJIQqVAZUBWQqVAZUBlQGVAZUBlQGrC5UBlQGVAZUBlQGVAZUBlQGVAXsKlQGVAbAK6wsrDGUMpQzlDJUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAZUBlQGVAfAKAAuZA64AtwCJALoC6ADwAAgAuACgA/oEpgO6AqsD+AAIAAgAswMIAAgACAAIAIkAuwP5AfsBwwPLAwgACAAIAAgACADRA9kDCAAIAOED6QMIAAgACAAIAAgACADuA/YDCAAIAP4DyQAIAAgABgQIAAgAXQAOBAgACAAIAAgACAAIABMECAAIAAgACAAIAAgACAD8AAQBCAAIAAgAGgQiBCoECAExBAgAEAEIAAgACAAIAAgACAAIAAgACAAIAAgACAA4BAgACABABEYECAAIAAgATAQYAQgAVAQIAAgACAAIAAgACAAIAAgACAAIAFoECAAIAAgACAAIA
AgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAOQEIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAB+BAcACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAEABhgSMBAgACAAIAAgAlAQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAwAEAAQABAADAAMAAwADAAQABAAEAAQABAAEAAQABHATAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAdQMIAAgACAAIAAgACAAIAMkACAAIAAgAfQMIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACACFA4kDCAAIAAgACAAIAOcBCAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAIcDCAAIAAgACAAIAAgACAAIAAgACAAIAJEDCAAIAAgACADFAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABgBAgAZgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAbAQCBXIECAAIAHkECAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACABAAJwEQACjBKoEsgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAC6BMIECAAIAAgACAAIAAgACABmBAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAxwQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAGYECAAIAAgAzgQIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBd0FXwUIAOIF6gXxBYoF3gT5BQAGCAaKBYoFigWKBYoFigWKBYoFigWKBYoFigXWBIoFigWKBYoFigWKBYoFigWKBYsFEAaKBYoFigWKBYoFigWKBRQGCACKBYoFigWKBQgACAAIANEECAAIABgGigUgBggAJgYIAC4GMwaKBYoF0wQ3Bj4GigWKBYoFigWKBYoFigWKBYoFigWKBYoFigUIAAgACAAIAAgACAAIAAgAigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWKBYoFigWLBf///////wQABAAEAAQABAAEAAQABAAEAAQAAwAEAAQAAgAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAQADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUAAAAFAAUAAAAFAAUAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUAAQAAAAUABQAFAAUABQAFAAAAAAAFAAUAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAFAAUAAQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAAABwAHAAcAAAAHAAcABwAFAAEAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAcABwAFAAUAAAAAAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAAAAQABAAAAAAAAAAAAAAAFAAUABQAFAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAHAAcAAAAHAAcAAAAAAAUABQAHAAUAAQAHAAEABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwABAAUABQAFAAUAAAAAAAAAAAAAAAEAAQABAAEAAQABAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABQANAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAEAAQABAAEAAQABAAEAAQABAAEAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAABQAHAAUABQAFAAAAAAAAAAcABQAFAAUABQAFAAQABAAEAAQABAAEAAQABAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAEAAQABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUAAAAFAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAUAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAcABwAFAAcABwAAAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUABwAHAAUABQAFAAUAAAAAAAcABwAAAAAABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAABQAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAAAAAAAAAAABQAFAAAAAAAFAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAFAAUABQAFAAUAAAAFAAUABwAAAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABwAFAAUABQAFAAAAAAAHAAcAAAAAAAcABwAFAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAAAAAAAAAHAAcABwAAAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAABQAHAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAUABQAFAAAABQAFAAUABQAAAAAAAAAAAAAAAAA
AAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAHAAcABQAHAAcAAAAFAAcABwAAAAcABwAFAAUAAAAAAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAFAAcABwAFAAUABQAAAAUAAAAHAAcABwAHAAcABwAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAHAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAABwAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAUAAAAFAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABwAFAAUABQAFAAUAAAAFAAUAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABwAFAAUABQAFAAUABQAAAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABQAFAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABQAFAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAHAAUABQAFAAUABQAFAAUABwAHAAcABwAHAAcABwAHAAUABwAHAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABwAHAAcABwAFAAUABwAHAAcAAAAAAAAAAAAHAAcABQAHAAcABwAHAAcABwAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAcABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAUABQAFAAUABQAFAAUAAAAFAAAABQAAAAAABQAFAAUABQAFAAUABQAFAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAFAAUAAAAAAAUABQAFAAUABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAUABwAFAAcABwAHAAcABwAFAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAUABQAFAAUABwAHAAUABQAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABQAFAAcABwAHAAUABwAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAcABQAFAAUABQAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAAAAAABwAFAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAAAAAAAAAFAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAUABQAHAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAA
UABQAFAAUABQAHAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAcABwAFAAUABQAFAAcABwAFAAUABwAHAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAFAAcABwAFAAUABwAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAFAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAFAAUABQAAAAAABQAFAAAAAAAAAAAAAAAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAcABwAAAAAAAAAAAAAABwAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAcABwAFAAcABwAAAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAFAAUABQAAAAUABQAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABwAFAAUABQAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAUABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAHAAcABQAHAAUABQAAAAAAAAAAAAAAAAAFAAAABwAHAAcABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAHAAcABwAAAAAABwAHAAAAAAAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABwAHAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAFAAUABwAFAAcABwAFAAcABQAFAAcABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAHAAcABQAFAAUABQAAAAAABwAHAAcABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAHAAUABQAFAAUABQAFAAUABQAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABwAFAAcABwAFAAUABQAFAAUABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAcABwAFAAUABQAFAAcABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAUABQAFAAUABQAHAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAFAAUABQAFAAAAAAAFAAUABwAHAAcABwAFAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABwAHAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAcABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUAAAAHAAUABQAFAAUABQAFAAUABwAFAAUABwAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUAAAAAAAAABQAAAAUABQAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAHAAcABwAHAAcAAAAFAAUAAAAHAAcABQAHAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAAAAUABQAFAAAAAAAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAFAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAFAAUABQAAAAAABQAFAAUABQAFAAUABQAAAAUABQAAAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAUABQAFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAFAAUABQAFAAUABQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAFAAUABQAFAAUADgAOAA4ADgAOAA4ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAA8ADwAPAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAcABwAHAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAgACAAIAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAMAAwADAAMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkACQAJAAkAAAAAAAAAAAAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAKAAoACgAAAAAAAAAAAAsADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwACwAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAMAAwADAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAADgAOAA4AAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAAAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4AAAAOAAAAAAAAAAAAAAAAAA4AAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAAAAAAAAAAAA4AAAAOAAAAAAAAAAAADgAOAA4AAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAA4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4AAAAAAA4ADgAOAA4ADgAOAA4ADgAOAAAADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4ADgAOAAAAAAAAAAAAAAAAAAAAAAAAAAAADgAOAA4ADgAOAA4AAAAAAAAAAAAAAAAAAAAAAA4ADgAOAA4ADgAOAA4ADgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAOAA4ADgAOAA4ADgAAAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4ADgAOAA4AAAAAAAAAAAA='; - - /* - * utrie 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars$1 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup$1 = typeof Uint8Array === 'undefined' ? [] : new Uint8Array(256); - for (var i$1 = 0; i$1 < chars$1.length; i$1++) { - lookup$1[chars$1.charCodeAt(i$1)] = i$1; - } - var decode = function (base64) { - var bufferLength = base64.length * 0.75, len = base64.length, i, p = 0, encoded1, encoded2, encoded3, encoded4; - if (base64[base64.length - 1] === '=') { - bufferLength--; - if (base64[base64.length - 2] === '=') { - bufferLength--; - } - } - var buffer = typeof ArrayBuffer !== 'undefined' && - typeof Uint8Array !== 'undefined' && - typeof Uint8Array.prototype.slice !== 'undefined' - ? new ArrayBuffer(bufferLength) - : new Array(bufferLength); - var bytes = Array.isArray(buffer) ? buffer : new Uint8Array(buffer); - for (i = 0; i < len; i += 4) { - encoded1 = lookup$1[base64.charCodeAt(i)]; - encoded2 = lookup$1[base64.charCodeAt(i + 1)]; - encoded3 = lookup$1[base64.charCodeAt(i + 2)]; - encoded4 = lookup$1[base64.charCodeAt(i + 3)]; - bytes[p++] = (encoded1 << 2) | (encoded2 >> 4); - bytes[p++] = ((encoded2 & 15) << 4) | (encoded3 >> 2); - bytes[p++] = ((encoded3 & 3) << 6) | (encoded4 & 63); - } - return buffer; - }; - var polyUint16Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 2) { - bytes.push((buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - var polyUint32Array = function (buffer) { - var length = buffer.length; - var bytes = []; - for (var i = 0; i < length; i += 4) { - bytes.push((buffer[i + 3] << 24) | (buffer[i + 2] << 16) | (buffer[i + 1] << 8) | buffer[i]); - } - return bytes; - }; - - /** Shift size for getting the index-2 table offset. */ - var UTRIE2_SHIFT_2 = 5; - /** Shift size for getting the index-1 table offset. */ - var UTRIE2_SHIFT_1 = 6 + 5; - /** - * Shift size for shifting left the index array values. - * Increases possible data size with 16-bit index values at the cost - * of compactability. - * This requires data blocks to be aligned by UTRIE2_DATA_GRANULARITY. 
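- * For example, with UTRIE2_SHIFT_2 = 5 a data block holds 32 values, so a BMP lookup - * becomes data[(index[cp >> 5] << UTRIE2_INDEX_SHIFT) + (cp & UTRIE2_DATA_MASK)].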
- */ - var UTRIE2_INDEX_SHIFT = 2; - /** - * Difference between the two shift sizes, - * for getting an index-1 offset from an index-2 offset. 6=11-5 - */ - var UTRIE2_SHIFT_1_2 = UTRIE2_SHIFT_1 - UTRIE2_SHIFT_2; - /** - * The part of the index-2 table for U+D800..U+DBFF stores values for - * lead surrogate code _units_ not code _points_. - * Values for lead surrogate code _points_ are indexed with this portion of the table. - * Length=32=0x20=0x400>>UTRIE2_SHIFT_2. (There are 1024=0x400 lead surrogates.) - */ - var UTRIE2_LSCP_INDEX_2_OFFSET = 0x10000 >> UTRIE2_SHIFT_2; - /** Number of entries in a data block. 32=0x20 */ - var UTRIE2_DATA_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_2; - /** Mask for getting the lower bits for the in-data-block offset. */ - var UTRIE2_DATA_MASK = UTRIE2_DATA_BLOCK_LENGTH - 1; - var UTRIE2_LSCP_INDEX_2_LENGTH = 0x400 >> UTRIE2_SHIFT_2; - /** Count the lengths of both BMP pieces. 2080=0x820 */ - var UTRIE2_INDEX_2_BMP_LENGTH = UTRIE2_LSCP_INDEX_2_OFFSET + UTRIE2_LSCP_INDEX_2_LENGTH; - /** - * The 2-byte UTF-8 version of the index-2 table follows at offset 2080=0x820. - * Length 32=0x20 for lead bytes C0..DF, regardless of UTRIE2_SHIFT_2. - */ - var UTRIE2_UTF8_2B_INDEX_2_OFFSET = UTRIE2_INDEX_2_BMP_LENGTH; - var UTRIE2_UTF8_2B_INDEX_2_LENGTH = 0x800 >> 6; /* U+0800 is the first code point after 2-byte UTF-8 */ - /** - * The index-1 table, only used for supplementary code points, at offset 2112=0x840. - * Variable length, for code points up to highStart, where the last single-value range starts. - * Maximum length 512=0x200=0x100000>>UTRIE2_SHIFT_1. - * (For 0x100000 supplementary code points U+10000..U+10ffff.) - * - * The part of the index-2 table for supplementary code points starts - * after this index-1 table. - * - * Both the index-1 table and the following part of the index-2 table - * are omitted completely if there is only BMP data. - */ - var UTRIE2_INDEX_1_OFFSET = UTRIE2_UTF8_2B_INDEX_2_OFFSET + UTRIE2_UTF8_2B_INDEX_2_LENGTH; - /** - * Number of index-1 entries for the BMP. 32=0x20 - * This part of the index-1 table is omitted from the serialized form. - */ - var UTRIE2_OMITTED_BMP_INDEX_1_LENGTH = 0x10000 >> UTRIE2_SHIFT_1; - /** Number of entries in an index-2 block. 64=0x40 */ - var UTRIE2_INDEX_2_BLOCK_LENGTH = 1 << UTRIE2_SHIFT_1_2; - /** Mask for getting the lower bits for the in-index-2-block offset. */ - var UTRIE2_INDEX_2_MASK = UTRIE2_INDEX_2_BLOCK_LENGTH - 1; - var slice16 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint16Array(Array.prototype.slice.call(view, start, end)); - }; - var slice32 = function (view, start, end) { - if (view.slice) { - return view.slice(start, end); - } - return new Uint32Array(Array.prototype.slice.call(view, start, end)); - }; - var createTrieFromBase64 = function (base64, _byteLength) { - var buffer = decode(base64); - var view32 = Array.isArray(buffer) ? polyUint32Array(buffer) : new Uint32Array(buffer); - var view16 = Array.isArray(buffer) ? polyUint16Array(buffer) : new Uint16Array(buffer); - var headerLength = 24; - var index = slice16(view16, headerLength / 2, view32[4] / 2); - var data = view32[5] === 2 - ? 
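- // the sixth header word selects 16-bit (=== 2) vs 32-bit data values - 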
slice16(view16, (headerLength + view32[4]) / 2) - : slice32(view32, Math.ceil((headerLength + view32[4]) / 4)); - return new Trie(view32[0], view32[1], view32[2], view32[3], index, data); - }; - var Trie = /** @class */ (function () { - function Trie(initialValue, errorValue, highStart, highValueIndex, index, data) { - this.initialValue = initialValue; - this.errorValue = errorValue; - this.highStart = highStart; - this.highValueIndex = highValueIndex; - this.index = index; - this.data = data; - } - /** - * Get the value for a code point as stored in the Trie. - * - * @param codePoint the code point - * @return the value - */ - Trie.prototype.get = function (codePoint) { - var ix; - if (codePoint >= 0) { - if (codePoint < 0x0d800 || (codePoint > 0x0dbff && codePoint <= 0x0ffff)) { - // Ordinary BMP code point, excluding leading surrogates. - // BMP uses a single level lookup. BMP index starts at offset 0 in the Trie2 index. - // 16 bit data is stored in the index array itself. - ix = this.index[codePoint >> UTRIE2_SHIFT_2]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0xffff) { - // Lead Surrogate Code Point. A Separate index section is stored for - // lead surrogate code units and code points. - // The main index has the code unit data. - // For this function, we need the code point data. - // Note: this expression could be refactored for slightly improved efficiency, but - // surrogate code points will be so rare in practice that it's not worth it. - ix = this.index[UTRIE2_LSCP_INDEX_2_OFFSET + ((codePoint - 0xd800) >> UTRIE2_SHIFT_2)]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint < this.highStart) { - // Supplemental code point, use two-level lookup. - ix = UTRIE2_INDEX_1_OFFSET - UTRIE2_OMITTED_BMP_INDEX_1_LENGTH + (codePoint >> UTRIE2_SHIFT_1); - ix = this.index[ix]; - ix += (codePoint >> UTRIE2_SHIFT_2) & UTRIE2_INDEX_2_MASK; - ix = this.index[ix]; - ix = (ix << UTRIE2_INDEX_SHIFT) + (codePoint & UTRIE2_DATA_MASK); - return this.data[ix]; - } - if (codePoint <= 0x10ffff) { - return this.data[this.highValueIndex]; - } - } - // Fall through. The code point is outside of the legal range of 0..0x10ffff. - return this.errorValue; - }; - return Trie; - }()); - - /* - * base64-arraybuffer 1.0.2 - * Copyright (c) 2022 Niklas von Hertzen - * Released under MIT License - */ - var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'; - // Use a lookup table to find the index. - var lookup = typeof Uint8Array === 'undefined' ? 
[] : new Uint8Array(256); - for (var i = 0; i < chars.length; i++) { - lookup[chars.charCodeAt(i)] = i; - } - - var Prepend = 1; - var CR = 2; - var LF = 3; - var Control = 4; - var Extend = 5; - var SpacingMark = 7; - var L = 8; - var V = 9; - var T = 10; - var LV = 11; - var LVT = 12; - var ZWJ = 13; - var Extended_Pictographic = 14; - var RI = 15; - var toCodePoints = function (str) { - var codePoints = []; - var i = 0; - var length = str.length; - while (i < length) { - var value = str.charCodeAt(i++); - if (value >= 0xd800 && value <= 0xdbff && i < length) { - var extra = str.charCodeAt(i++); - if ((extra & 0xfc00) === 0xdc00) { - codePoints.push(((value & 0x3ff) << 10) + (extra & 0x3ff) + 0x10000); - } - else { - codePoints.push(value); - i--; - } - } - else { - codePoints.push(value); - } - } - return codePoints; - }; - var fromCodePoint = function () { - var codePoints = []; - for (var _i = 0; _i < arguments.length; _i++) { - codePoints[_i] = arguments[_i]; - } - if (String.fromCodePoint) { - return String.fromCodePoint.apply(String, codePoints); - } - var length = codePoints.length; - if (!length) { - return ''; - } - var codeUnits = []; - var index = -1; - var result = ''; - while (++index < length) { - var codePoint = codePoints[index]; - if (codePoint <= 0xffff) { - codeUnits.push(codePoint); - } - else { - codePoint -= 0x10000; - codeUnits.push((codePoint >> 10) + 0xd800, (codePoint % 0x400) + 0xdc00); - } - if (index + 1 === length || codeUnits.length > 0x4000) { - result += String.fromCharCode.apply(String, codeUnits); - codeUnits.length = 0; - } - } - return result; - }; - var UnicodeTrie = createTrieFromBase64(base64); - var BREAK_NOT_ALLOWED = '×'; - var BREAK_ALLOWED = '÷'; - var codePointToClass = function (codePoint) { return UnicodeTrie.get(codePoint); }; - var _graphemeBreakAtIndex = function (_codePoints, classTypes, index) { - var prevIndex = index - 2; - var prev = classTypes[prevIndex]; - var current = classTypes[index - 1]; - var next = classTypes[index]; - // GB3 Do not break between a CR and LF - if (current === CR && next === LF) { - return BREAK_NOT_ALLOWED; - } - // GB4 Otherwise, break before and after controls. - if (current === CR || current === LF || current === Control) { - return BREAK_ALLOWED; - } - // GB5 - if (next === CR || next === LF || next === Control) { - return BREAK_ALLOWED; - } - // Do not break Hangul syllable sequences. - // GB6 - if (current === L && [L, V, LV, LVT].indexOf(next) !== -1) { - return BREAK_NOT_ALLOWED; - } - // GB7 - if ((current === LV || current === V) && (next === V || next === T)) { - return BREAK_NOT_ALLOWED; - } - // GB8 - if ((current === LVT || current === T) && next === T) { - return BREAK_NOT_ALLOWED; - } - // GB9 Do not break before extending characters or ZWJ. - if (next === ZWJ || next === Extend) { - return BREAK_NOT_ALLOWED; - } - // Do not break before SpacingMarks, or after Prepend characters. - // GB9a - if (next === SpacingMark) { - return BREAK_NOT_ALLOWED; - } - // GB9a - if (current === Prepend) { - return BREAK_NOT_ALLOWED; - } - // GB11 Do not break within emoji modifier sequences or emoji zwj sequences. - if (current === ZWJ && next === Extended_Pictographic) { - while (prev === Extend) { - prev = classTypes[--prevIndex]; - } - if (prev === Extended_Pictographic) { - return BREAK_NOT_ALLOWED; - } - } - // GB12 Do not break within emoji flag sequences. - // That is, do not break between regional indicator (RI) symbols - // if there is an odd number of RI characters before the break point. 
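- // e.g. a run of four RIs (two flag emoji) only allows a break after each complete pair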
- if (current === RI && next === RI) { - var countRI = 0; - while (prev === RI) { - countRI++; - prev = classTypes[--prevIndex]; - } - if (countRI % 2 === 0) { - return BREAK_NOT_ALLOWED; - } - } - return BREAK_ALLOWED; - }; - var GraphemeBreaker = function (str) { - var codePoints = toCodePoints(str); - var length = codePoints.length; - var index = 0; - var lastEnd = 0; - var classTypes = codePoints.map(codePointToClass); - return { - next: function () { - if (index >= length) { - return { done: true, value: null }; - } - var graphemeBreak = BREAK_NOT_ALLOWED; - while (index < length && - (graphemeBreak = _graphemeBreakAtIndex(codePoints, classTypes, ++index)) === BREAK_NOT_ALLOWED) { } - if (graphemeBreak !== BREAK_NOT_ALLOWED || index === length) { - var value = fromCodePoint.apply(null, codePoints.slice(lastEnd, index)); - lastEnd = index; - return { value: value, done: false }; - } - return { done: true, value: null }; - }, - }; - }; - var splitGraphemes = function (str) { - var breaker = GraphemeBreaker(str); - var graphemes = []; - var bk; - while (!(bk = breaker.next()).done) { - if (bk.value) { - graphemes.push(bk.value.slice()); - } - } - return graphemes; - }; - - var testRangeBounds = function (document) { - var TEST_HEIGHT = 123; - if (document.createRange) { - var range = document.createRange(); - if (range.getBoundingClientRect) { - var testElement = document.createElement('boundtest'); - testElement.style.height = TEST_HEIGHT + "px"; - testElement.style.display = 'block'; - document.body.appendChild(testElement); - range.selectNode(testElement); - var rangeBounds = range.getBoundingClientRect(); - var rangeHeight = Math.round(rangeBounds.height); - document.body.removeChild(testElement); - if (rangeHeight === TEST_HEIGHT) { - return true; - } - } - } - return false; - }; - var testIOSLineBreak = function (document) { - var testElement = document.createElement('boundtest'); - testElement.style.width = '50px'; - testElement.style.display = 'block'; - testElement.style.fontSize = '12px'; - testElement.style.letterSpacing = '0px'; - testElement.style.wordSpacing = '0px'; - document.body.appendChild(testElement); - var range = document.createRange(); - testElement.innerHTML = typeof ''.repeat === 'function' ? 
'👨'.repeat(10) : ''; - var node = testElement.firstChild; - var textList = toCodePoints$1(node.data).map(function (i) { return fromCodePoint$1(i); }); - var offset = 0; - var prev = {}; - // ios 13 does not handle range getBoundingClientRect line changes correctly #2177 - var supports = textList.every(function (text, i) { - range.setStart(node, offset); - range.setEnd(node, offset + text.length); - var rect = range.getBoundingClientRect(); - offset += text.length; - var boundAhead = rect.x > prev.x || rect.y > prev.y; - prev = rect; - if (i === 0) { - return true; - } - return boundAhead; - }); - document.body.removeChild(testElement); - return supports; - }; - var testCORS = function () { return typeof new Image().crossOrigin !== 'undefined'; }; - var testResponseType = function () { return typeof new XMLHttpRequest().responseType === 'string'; }; - var testSVG = function (document) { - var img = new Image(); - var canvas = document.createElement('canvas'); - var ctx = canvas.getContext('2d'); - if (!ctx) { - return false; - } - img.src = "data:image/svg+xml,"; - try { - ctx.drawImage(img, 0, 0); - canvas.toDataURL(); - } - catch (e) { - return false; - } - return true; - }; - var isGreenPixel = function (data) { - return data[0] === 0 && data[1] === 255 && data[2] === 0 && data[3] === 255; - }; - var testForeignObject = function (document) { - var canvas = document.createElement('canvas'); - var size = 100; - canvas.width = size; - canvas.height = size; - var ctx = canvas.getContext('2d'); - if (!ctx) { - return Promise.reject(false); - } - ctx.fillStyle = 'rgb(0, 255, 0)'; - ctx.fillRect(0, 0, size, size); - var img = new Image(); - var greenImageSrc = canvas.toDataURL(); - img.src = greenImageSrc; - var svg = createForeignObjectSVG(size, size, 0, 0, img); - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - return loadSerializedSVG$1(svg) - .then(function (img) { - ctx.drawImage(img, 0, 0); - var data = ctx.getImageData(0, 0, size, size).data; - ctx.fillStyle = 'red'; - ctx.fillRect(0, 0, size, size); - var node = document.createElement('div'); - node.style.backgroundImage = "url(" + greenImageSrc + ")"; - node.style.height = size + "px"; - // Firefox 55 does not render inline tags - return isGreenPixel(data) - ? 
loadSerializedSVG$1(createForeignObjectSVG(size, size, 0, 0, node)) - : Promise.reject(false); - }) - .then(function (img) { - ctx.drawImage(img, 0, 0); - // Edge does not render background-images - return isGreenPixel(ctx.getImageData(0, 0, size, size).data); - }) - .catch(function () { return false; }); - }; - var createForeignObjectSVG = function (width, height, x, y, node) { - var xmlns = 'http://www.w3.org/2000/svg'; - var svg = document.createElementNS(xmlns, 'svg'); - var foreignObject = document.createElementNS(xmlns, 'foreignObject'); - svg.setAttributeNS(null, 'width', width.toString()); - svg.setAttributeNS(null, 'height', height.toString()); - foreignObject.setAttributeNS(null, 'width', '100%'); - foreignObject.setAttributeNS(null, 'height', '100%'); - foreignObject.setAttributeNS(null, 'x', x.toString()); - foreignObject.setAttributeNS(null, 'y', y.toString()); - foreignObject.setAttributeNS(null, 'externalResourcesRequired', 'true'); - svg.appendChild(foreignObject); - foreignObject.appendChild(node); - return svg; - }; - var loadSerializedSVG$1 = function (svg) { - return new Promise(function (resolve, reject) { - var img = new Image(); - img.onload = function () { return resolve(img); }; - img.onerror = reject; - img.src = "data:image/svg+xml;charset=utf-8," + encodeURIComponent(new XMLSerializer().serializeToString(svg)); - }); - }; - var FEATURES = { - get SUPPORT_RANGE_BOUNDS() { - var value = testRangeBounds(document); - Object.defineProperty(FEATURES, 'SUPPORT_RANGE_BOUNDS', { value: value }); - return value; - }, - get SUPPORT_WORD_BREAKING() { - var value = FEATURES.SUPPORT_RANGE_BOUNDS && testIOSLineBreak(document); - Object.defineProperty(FEATURES, 'SUPPORT_WORD_BREAKING', { value: value }); - return value; - }, - get SUPPORT_SVG_DRAWING() { - var value = testSVG(document); - Object.defineProperty(FEATURES, 'SUPPORT_SVG_DRAWING', { value: value }); - return value; - }, - get SUPPORT_FOREIGNOBJECT_DRAWING() { - var value = typeof Array.from === 'function' && typeof window.fetch === 'function' - ? 
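- // the asynchronous foreignObject probe needs Array.from and window.fetch; older browsers resolve straight to false - 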
-                ? testForeignObject(document)
-                : Promise.resolve(false);
-            Object.defineProperty(FEATURES, 'SUPPORT_FOREIGNOBJECT_DRAWING', { value: value });
-            return value;
-        },
-        get SUPPORT_CORS_IMAGES() {
-            var value = testCORS();
-            Object.defineProperty(FEATURES, 'SUPPORT_CORS_IMAGES', { value: value });
-            return value;
-        },
-        get SUPPORT_RESPONSE_TYPE() {
-            var value = testResponseType();
-            Object.defineProperty(FEATURES, 'SUPPORT_RESPONSE_TYPE', { value: value });
-            return value;
-        },
-        get SUPPORT_CORS_XHR() {
-            var value = 'withCredentials' in new XMLHttpRequest();
-            Object.defineProperty(FEATURES, 'SUPPORT_CORS_XHR', { value: value });
-            return value;
-        },
-        get SUPPORT_NATIVE_TEXT_SEGMENTATION() {
-            // eslint-disable-next-line @typescript-eslint/no-explicit-any
-            var value = !!(typeof Intl !== 'undefined' && Intl.Segmenter);
-            Object.defineProperty(FEATURES, 'SUPPORT_NATIVE_TEXT_SEGMENTATION', { value: value });
-            return value;
-        }
-    };
-
-    var TextBounds = /** @class */ (function () {
-        function TextBounds(text, bounds) {
-            this.text = text;
-            this.bounds = bounds;
-        }
-        return TextBounds;
-    }());
-    var parseTextBounds = function (context, value, styles, node) {
-        var textList = breakText(value, styles);
-        var textBounds = [];
-        var offset = 0;
-        textList.forEach(function (text) {
-            if (styles.textDecorationLine.length || text.trim().length > 0) {
-                if (FEATURES.SUPPORT_RANGE_BOUNDS) {
-                    var clientRects = createRange(node, offset, text.length).getClientRects();
-                    if (clientRects.length > 1) {
-                        var subSegments = segmentGraphemes(text);
-                        var subOffset_1 = 0;
-                        subSegments.forEach(function (subSegment) {
-                            textBounds.push(new TextBounds(subSegment, Bounds.fromDOMRectList(context, createRange(node, subOffset_1 + offset, subSegment.length).getClientRects())));
-                            subOffset_1 += subSegment.length;
-                        });
-                    }
-                    else {
-                        textBounds.push(new TextBounds(text, Bounds.fromDOMRectList(context, clientRects)));
-                    }
-                }
-                else {
-                    var replacementNode = node.splitText(text.length);
-                    textBounds.push(new TextBounds(text, getWrapperBounds(context, node)));
-                    node = replacementNode;
-                }
-            }
-            else if (!FEATURES.SUPPORT_RANGE_BOUNDS) {
-                node = node.splitText(text.length);
-            }
-            offset += text.length;
-        });
-        return textBounds;
-    };
-    var getWrapperBounds = function (context, node) {
-        var ownerDocument = node.ownerDocument;
-        if (ownerDocument) {
-            var wrapper = ownerDocument.createElement('html2canvaswrapper');
-            wrapper.appendChild(node.cloneNode(true));
-            var parentNode = node.parentNode;
-            if (parentNode) {
-                parentNode.replaceChild(wrapper, node);
-                var bounds = parseBounds(context, wrapper);
-                if (wrapper.firstChild) {
-                    parentNode.replaceChild(wrapper.firstChild, wrapper);
-                }
-                return bounds;
-            }
-        }
-        return Bounds.EMPTY;
-    };
-    var createRange = function (node, offset, length) {
-        var ownerDocument = node.ownerDocument;
-        if (!ownerDocument) {
-            throw new Error('Node has no owner document');
-        }
-        var range = ownerDocument.createRange();
-        range.setStart(node, offset);
-        range.setEnd(node, offset + length);
-        return range;
-    };
-    var segmentGraphemes = function (value) {
-        if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) {
-            // eslint-disable-next-line @typescript-eslint/no-explicit-any
-            var segmenter = new Intl.Segmenter(void 0, { granularity: 'grapheme' });
-            // eslint-disable-next-line @typescript-eslint/no-explicit-any
-            return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; });
-        }
-        return splitGraphemes(value);
-    };
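-    /* segmentWords below is the word-level counterpart of segmentGraphemes: with
-       native support, new Intl.Segmenter(void 0, { granularity: 'word' })
-       .segment('foo bar') yields the segments 'foo', ' ', 'bar', whereas the
-       'grapheme' granularity used above keeps multi-code-point clusters such as
-       '👨‍👩‍👦' together as one user-perceived character. */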
-    var segmentWords = function (value, styles) {
-        if (FEATURES.SUPPORT_NATIVE_TEXT_SEGMENTATION) {
-            // eslint-disable-next-line @typescript-eslint/no-explicit-any
-            var segmenter = new Intl.Segmenter(void 0, {
-                granularity: 'word'
-            });
-            // eslint-disable-next-line @typescript-eslint/no-explicit-any
-            return Array.from(segmenter.segment(value)).map(function (segment) { return segment.segment; });
-        }
-        return breakWords(value, styles);
-    };
-    var breakText = function (value, styles) {
-        return styles.letterSpacing !== 0 ? segmentGraphemes(value) : segmentWords(value, styles);
-    };
-    // https://drafts.csswg.org/css-text/#word-separator
-    var wordSeparators = [0x0020, 0x00a0, 0x1361, 0x10100, 0x10101, 0x1039, 0x1091];
-    var breakWords = function (str, styles) {
-        var breaker = LineBreaker(str, {
-            lineBreak: styles.lineBreak,
-            wordBreak: styles.overflowWrap === "break-word" /* BREAK_WORD */ ? 'break-word' : styles.wordBreak
-        });
-        var words = [];
-        var bk;
-        var _loop_1 = function () {
-            if (bk.value) {
-                var value = bk.value.slice();
-                var codePoints = toCodePoints$1(value);
-                var word_1 = '';
-                codePoints.forEach(function (codePoint) {
-                    if (wordSeparators.indexOf(codePoint) === -1) {
-                        word_1 += fromCodePoint$1(codePoint);
-                    }
-                    else {
-                        if (word_1.length) {
-                            words.push(word_1);
-                        }
-                        words.push(fromCodePoint$1(codePoint));
-                        word_1 = '';
-                    }
-                });
-                if (word_1.length) {
-                    words.push(word_1);
-                }
-            }
-        };
-        while (!(bk = breaker.next()).done) {
-            _loop_1();
-        }
-        return words;
-    };
-
-    var TextContainer = /** @class */ (function () {
-        function TextContainer(context, node, styles) {
-            this.text = transform(node.data, styles.textTransform);
-            this.textBounds = parseTextBounds(context, this.text, styles, node);
-        }
-        return TextContainer;
-    }());
-    var transform = function (text, transform) {
-        switch (transform) {
-            case 1 /* LOWERCASE */:
-                return text.toLowerCase();
-            case 3 /* CAPITALIZE */:
-                return text.replace(CAPITALIZE, capitalize);
-            case 2 /* UPPERCASE */:
-                return text.toUpperCase();
-            default:
-                return text;
-        }
-    };
-    var CAPITALIZE = /(^|\s|:|-|\(|\))([a-z])/g;
-    var capitalize = function (m, p1, p2) {
-        if (m.length > 0) {
-            return p1 + p2.toUpperCase();
-        }
-        return m;
-    };
-
-    var ImageElementContainer = /** @class */ (function (_super) {
-        __extends(ImageElementContainer, _super);
-        function ImageElementContainer(context, img) {
-            var _this = _super.call(this, context, img) || this;
-            _this.src = img.currentSrc || img.src;
-            _this.intrinsicWidth = img.naturalWidth;
-            _this.intrinsicHeight = img.naturalHeight;
-            _this.context.cache.addImage(_this.src);
-            return _this;
-        }
-        return ImageElementContainer;
-    }(ElementContainer));
-
-    var CanvasElementContainer = /** @class */ (function (_super) {
-        __extends(CanvasElementContainer, _super);
-        function CanvasElementContainer(context, canvas) {
-            var _this = _super.call(this, context, canvas) || this;
-            _this.canvas = canvas;
-            _this.intrinsicWidth = canvas.width;
-            _this.intrinsicHeight = canvas.height;
-            return _this;
-        }
-        return CanvasElementContainer;
-    }(ElementContainer));
-
-    var SVGElementContainer = /** @class */ (function (_super) {
-        __extends(SVGElementContainer, _super);
-        function SVGElementContainer(context, img) {
-            var _this = _super.call(this, context, img) || this;
-            var s = new XMLSerializer();
-            var bounds = parseBounds(context, img);
-            img.setAttribute('width', bounds.width + "px");
-            img.setAttribute('height', bounds.height + "px");
-            _this.svg = "data:image/svg+xml," + encodeURIComponent(s.serializeToString(img));
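-            /* The inline <svg> was pinned to its laid-out pixel size above, then
-               serialized into a data: URI of URI-encoded markup (of the form
-               "data:image/svg+xml,%3Csvg%20..."), so the image cache can load and
-               draw it exactly like a bitmap. */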
-            _this.intrinsicWidth = img.width.baseVal.value;
-            _this.intrinsicHeight = img.height.baseVal.value;
-            _this.context.cache.addImage(_this.svg);
-            return _this;
-        }
-        return SVGElementContainer;
-    }(ElementContainer));
-
-    var LIElementContainer = /** @class */ (function (_super) {
-        __extends(LIElementContainer, _super);
-        function LIElementContainer(context, element) {
-            var _this = _super.call(this, context, element) || this;
-            _this.value = element.value;
-            return _this;
-        }
-        return LIElementContainer;
-    }(ElementContainer));
-
-    var OLElementContainer = /** @class */ (function (_super) {
-        __extends(OLElementContainer, _super);
-        function OLElementContainer(context, element) {
-            var _this = _super.call(this, context, element) || this;
-            _this.start = element.start;
-            _this.reversed = typeof element.reversed === 'boolean' && element.reversed === true;
-            return _this;
-        }
-        return OLElementContainer;
-    }(ElementContainer));
-
-    var CHECKBOX_BORDER_RADIUS = [
-        {
-            type: 15 /* DIMENSION_TOKEN */,
-            flags: 0,
-            unit: 'px',
-            number: 3
-        }
-    ];
-    var RADIO_BORDER_RADIUS = [
-        {
-            type: 16 /* PERCENTAGE_TOKEN */,
-            flags: 0,
-            number: 50
-        }
-    ];
-    var reformatInputBounds = function (bounds) {
-        if (bounds.width > bounds.height) {
-            return new Bounds(bounds.left + (bounds.width - bounds.height) / 2, bounds.top, bounds.height, bounds.height);
-        }
-        else if (bounds.width < bounds.height) {
-            return new Bounds(bounds.left, bounds.top + (bounds.height - bounds.width) / 2, bounds.width, bounds.width);
-        }
-        return bounds;
-    };
-    var getInputValue = function (node) {
-        var value = node.type === PASSWORD ? new Array(node.value.length + 1).join('\u2022') : node.value;
-        return value.length === 0 ? node.placeholder || '' : value;
-    };
-    var CHECKBOX = 'checkbox';
-    var RADIO = 'radio';
-    var PASSWORD = 'password';
-    var INPUT_COLOR = 0x2a2a2aff;
-    var InputElementContainer = /** @class */ (function (_super) {
-        __extends(InputElementContainer, _super);
-        function InputElementContainer(context, input) {
-            var _this = _super.call(this, context, input) || this;
-            _this.type = input.type.toLowerCase();
-            _this.checked = input.checked;
-            _this.value = getInputValue(input);
-            if (_this.type === CHECKBOX || _this.type === RADIO) {
-                _this.styles.backgroundColor = 0xdededeff;
-                _this.styles.borderTopColor =
-                    _this.styles.borderRightColor =
-                        _this.styles.borderBottomColor =
-                            _this.styles.borderLeftColor =
-                                0xa5a5a5ff;
-                _this.styles.borderTopWidth =
-                    _this.styles.borderRightWidth =
-                        _this.styles.borderBottomWidth =
-                            _this.styles.borderLeftWidth =
-                                1;
-                _this.styles.borderTopStyle =
-                    _this.styles.borderRightStyle =
-                        _this.styles.borderBottomStyle =
-                            _this.styles.borderLeftStyle =
-                                1 /* SOLID */;
-                _this.styles.backgroundClip = [0 /* BORDER_BOX */];
-                _this.styles.backgroundOrigin = [0 /* BORDER_BOX */];
-                _this.bounds = reformatInputBounds(_this.bounds);
-            }
-            switch (_this.type) {
-                case CHECKBOX:
-                    _this.styles.borderTopRightRadius =
-                        _this.styles.borderTopLeftRadius =
-                            _this.styles.borderBottomRightRadius =
-                                _this.styles.borderBottomLeftRadius =
-                                    CHECKBOX_BORDER_RADIUS;
-                    break;
-                case RADIO:
-                    _this.styles.borderTopRightRadius =
-                        _this.styles.borderTopLeftRadius =
-                            _this.styles.borderBottomRightRadius =
-                                _this.styles.borderBottomLeftRadius =
-                                    RADIO_BORDER_RADIUS;
-                    break;
-            }
-            return _this;
-        }
-        return InputElementContainer;
-    }(ElementContainer));
-
-    var SelectElementContainer = /** @class */ (function (_super) {
-        __extends(SelectElementContainer, _super);
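-        /* InputElementContainer above swaps in native-looking checkbox/radio chrome
-           (grey fill, 1px solid borders, 3px or 50% corner radius) and squares the
-           bounds via reformatInputBounds: e.g. a 20x12 box at (0, 0) becomes a 12x12
-           square at left = 0 + (20 - 12) / 2 = 4, i.e. Bounds(4, 0, 12, 12). */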
-        function SelectElementContainer(context, element) {
-            var _this = _super.call(this, context, element) || this;
-            var option = element.options[element.selectedIndex || 0];
-            _this.value = option ? option.text || '' : '';
-            return _this;
-        }
-        return SelectElementContainer;
-    }(ElementContainer));
-
-    var TextareaElementContainer = /** @class */ (function (_super) {
-        __extends(TextareaElementContainer, _super);
-        function TextareaElementContainer(context, element) {
-            var _this = _super.call(this, context, element) || this;
-            _this.value = element.value;
-            return _this;
-        }
-        return TextareaElementContainer;
-    }(ElementContainer));
-
-    var IFrameElementContainer = /** @class */ (function (_super) {
-        __extends(IFrameElementContainer, _super);
-        function IFrameElementContainer(context, iframe) {
-            var _this = _super.call(this, context, iframe) || this;
-            _this.src = iframe.src;
-            _this.width = parseInt(iframe.width, 10) || 0;
-            _this.height = parseInt(iframe.height, 10) || 0;
-            _this.backgroundColor = _this.styles.backgroundColor;
-            try {
-                if (iframe.contentWindow &&
-                    iframe.contentWindow.document &&
-                    iframe.contentWindow.document.documentElement) {
-                    _this.tree = parseTree(context, iframe.contentWindow.document.documentElement);
-                    // http://www.w3.org/TR/css3-background/#special-backgrounds
-                    var documentBackgroundColor = iframe.contentWindow.document.documentElement
-                        ? parseColor(context, getComputedStyle(iframe.contentWindow.document.documentElement).backgroundColor)
-                        : COLORS.TRANSPARENT;
-                    var bodyBackgroundColor = iframe.contentWindow.document.body
-                        ? parseColor(context, getComputedStyle(iframe.contentWindow.document.body).backgroundColor)
-                        : COLORS.TRANSPARENT;
-                    _this.backgroundColor = isTransparent(documentBackgroundColor)
-                        ? isTransparent(bodyBackgroundColor)
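-                        /* css3-background "special backgrounds" fallback: use the <html>
-                           background unless it is transparent, then the <body> background,
-                           and only when both are transparent keep the iframe element's own
-                           computed background (the branch just below). */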
-                            ? _this.styles.backgroundColor
-                            : bodyBackgroundColor
-                        : documentBackgroundColor;
-                }
-            }
-            catch (e) { }
-            return _this;
-        }
-        return IFrameElementContainer;
-    }(ElementContainer));
-
-    var LIST_OWNERS = ['OL', 'UL', 'MENU'];
-    var parseNodeTree = function (context, node, parent, root) {
-        for (var childNode = node.firstChild, nextNode = void 0; childNode; childNode = nextNode) {
-            nextNode = childNode.nextSibling;
-            if (isTextNode(childNode) && childNode.data.trim().length > 0) {
-                parent.textNodes.push(new TextContainer(context, childNode, parent.styles));
-            }
-            else if (isElementNode(childNode)) {
-                if (isSlotElement(childNode) && childNode.assignedNodes) {
-                    childNode.assignedNodes().forEach(function (childNode) { return parseNodeTree(context, childNode, parent, root); });
-                }
-                else {
-                    var container = createContainer(context, childNode);
-                    if (container.styles.isVisible()) {
-                        if (createsRealStackingContext(childNode, container, root)) {
-                            container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */;
-                        }
-                        else if (createsStackingContext(container.styles)) {
-                            container.flags |= 2 /* CREATES_STACKING_CONTEXT */;
-                        }
-                        if (LIST_OWNERS.indexOf(childNode.tagName) !== -1) {
-                            container.flags |= 8 /* IS_LIST_OWNER */;
-                        }
-                        parent.elements.push(container);
-                        childNode.slot; // no-op expression statement; it has no effect
-                        if (childNode.shadowRoot) {
-                            parseNodeTree(context, childNode.shadowRoot, container, root);
-                        }
-                        else if (!isTextareaElement(childNode) &&
-                            !isSVGElement(childNode) &&
-                            !isSelectElement(childNode)) {
-                            parseNodeTree(context, childNode, container, root);
-                        }
-                    }
-                }
-            }
-        }
-    };
-    var createContainer = function (context, element) {
-        if (isImageElement(element)) {
-            return new ImageElementContainer(context, element);
-        }
-        if (isCanvasElement(element)) {
-            return new CanvasElementContainer(context, element);
-        }
-        if (isSVGElement(element)) {
-            return new SVGElementContainer(context, element);
-        }
-        if (isLIElement(element)) {
-            return new LIElementContainer(context, element);
-        }
-        if (isOLElement(element)) {
-            return new OLElementContainer(context, element);
-        }
-        if (isInputElement(element)) {
-            return new InputElementContainer(context, element);
-        }
-        if (isSelectElement(element)) {
-            return new SelectElementContainer(context, element);
-        }
-        if (isTextareaElement(element)) {
-            return new TextareaElementContainer(context, element);
-        }
-        if (isIFrameElement(element)) {
-            return new IFrameElementContainer(context, element);
-        }
-        return new ElementContainer(context, element);
-    };
-    var parseTree = function (context, element) {
-        var container = createContainer(context, element);
-        container.flags |= 4 /* CREATES_REAL_STACKING_CONTEXT */;
-        parseNodeTree(context, element, container, container);
-        return container;
-    };
-    var createsRealStackingContext = function (node, container, root) {
-        return (container.styles.isPositionedWithZIndex() ||
-            container.styles.opacity < 1 ||
-            container.styles.isTransformed() ||
-            (isBodyElement(node) && root.styles.isTransparent()));
-    };
-    var createsStackingContext = function (styles) { return styles.isPositioned() || styles.isFloating(); };
-    var isTextNode = function (node) { return node.nodeType === Node.TEXT_NODE; };
-    var isElementNode = function (node) { return node.nodeType === Node.ELEMENT_NODE; };
-    var isHTMLElementNode = function (node) {
-        return isElementNode(node) && typeof node.style !== 'undefined' && !isSVGElementNode(node);
-    };
-    var isSVGElementNode = function (element) {
-        return typeof element.className === 'object';
-    };
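-    /* container.flags is a bitmask: 2 marks a stacking context (positioned or
-       floating), 4 a "real" stacking context (positioned with z-index, opacity < 1,
-       transformed, or a <body> under a transparent root; parseTree always sets 4 on
-       the root container), and 8 a list owner (OL/UL/MENU). A flag is tested with,
-       e.g., (container.flags & 4) !== 0. */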
-    var isLIElement = function (node) { return node.tagName === 'LI'; };
-    var isOLElement = function (node) { return node.tagName === 'OL'; };
-    var isInputElement = function (node) { return node.tagName === 'INPUT'; };
-    var isHTMLElement = function (node) { return node.tagName === 'HTML'; };
-    var isSVGElement = function (node) { return node.tagName === 'svg'; };
-    var isBodyElement = function (node) { return node.tagName === 'BODY'; };
-    var isCanvasElement = function (node) { return node.tagName === 'CANVAS'; };
-    var isVideoElement = function (node) { return node.tagName === 'VIDEO'; };
-    var isImageElement = function (node) { return node.tagName === 'IMG'; };
-    var isIFrameElement = function (node) { return node.tagName === 'IFRAME'; };
-    var isStyleElement = function (node) { return node.tagName === 'STYLE'; };
-    var isScriptElement = function (node) { return node.tagName === 'SCRIPT'; };
-    var isTextareaElement = function (node) { return node.tagName === 'TEXTAREA'; };
-    var isSelectElement = function (node) { return node.tagName === 'SELECT'; };
-    var isSlotElement = function (node) { return node.tagName === 'SLOT'; };
-    // https://html.spec.whatwg.org/multipage/custom-elements.html#valid-custom-element-name
-    var isCustomElement = function (node) { return node.tagName.indexOf('-') > 0; };
-
-    var CounterState = /** @class */ (function () {
-        function CounterState() {
-            this.counters = {};
-        }
-        CounterState.prototype.getCounterValue = function (name) {
-            var counter = this.counters[name];
-            if (counter && counter.length) {
-                return counter[counter.length - 1];
-            }
-            return 1;
-        };
-        CounterState.prototype.getCounterValues = function (name) {
-            var counter = this.counters[name];
-            return counter ? counter : [];
-        };
-        CounterState.prototype.pop = function (counters) {
-            var _this = this;
-            counters.forEach(function (counter) { return _this.counters[counter].pop(); });
-        };
-        CounterState.prototype.parse = function (style) {
-            var _this = this;
-            var counterIncrement = style.counterIncrement;
-            var counterReset = style.counterReset;
-            var canReset = true;
-            if (counterIncrement !== null) {
-                counterIncrement.forEach(function (entry) {
-                    var counter = _this.counters[entry.counter];
-                    if (counter && entry.increment !== 0) {
-                        canReset = false;
-                        if (!counter.length) {
-                            counter.push(1);
-                        }
-                        counter[Math.max(0, counter.length - 1)] += entry.increment;
-                    }
-                });
-            }
-            var counterNames = [];
-            if (canReset) {
-                counterReset.forEach(function (entry) {
-                    var counter = _this.counters[entry.counter];
-                    counterNames.push(entry.counter);
-                    if (!counter) {
-                        counter = _this.counters[entry.counter] = [];
-                    }
-                    counter.push(entry.reset);
-                });
-            }
-            return counterNames;
-        };
-        return CounterState;
-    }());
-    var ROMAN_UPPER = {
-        integers: [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1],
-        values: ['M', 'CM', 'D', 'CD', 'C', 'XC', 'L', 'XL', 'X', 'IX', 'V', 'IV', 'I']
-    };
-    var ARMENIAN = {
-        integers: [
-            9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90, 80, 70,
-            60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
-        ],
-        values: [
-            'Ք',
-            'Փ',
-            'Ւ',
-            'Ց',
-            'Ր',
-            'Տ',
-            'Վ',
-            'Ս',
-            'Ռ',
-            'Ջ',
-            'Պ',
-            'Չ',
-            'Ո',
-            'Շ',
-            'Ն',
-            'Յ',
-            'Մ',
-            'Ճ',
-            'Ղ',
-            'Ձ',
-            'Հ',
-            'Կ',
-            'Ծ',
-            'Խ',
-            'Լ',
-            'Ի',
-            'Ժ',
-            'Թ',
-            'Ը',
-            'Է',
-            'Զ',
-            'Ե',
-            'Դ',
-            'Գ',
-            'Բ',
-            'Ա'
-        ]
-    };
-    var HEBREW = {
-        integers: [
-            10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20,
-            19, 18, 17, 16, 15, 10, 9, 8, 7,
-            6, 5, 4, 3, 2, 1
-        ],
-        values: [
-            'י׳',
-            'ט׳',
-            'ח׳',
-            'ז׳',
-            'ו׳',
-            'ה׳',
-            'ד׳',
-            'ג׳',
-            'ב׳',
-            'א׳',
-            'ת',
-            'ש',
-            'ר',
-            'ק',
-            'צ',
-            'פ',
-            'ע',
-            'ס',
-            'נ',
-            'מ',
-            'ל',
-            'כ',
-            'יט',
-            'יח',
-            'יז',
-            'טז',
-            'טו',
-            'י',
-            'ט',
-            'ח',
-            'ז',
-            'ו',
-            'ה',
-            'ד',
-            'ג',
-            'ב',
-            'א'
-        ]
-    };
-    var GEORGIAN = {
-        integers: [
-            10000, 9000, 8000, 7000, 6000, 5000, 4000, 3000, 2000, 1000, 900, 800, 700, 600, 500, 400, 300, 200, 100, 90,
-            80, 70, 60, 50, 40, 30, 20, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
-        ],
-        values: [
-            'ჵ',
-            'ჰ',
-            'ჯ',
-            'ჴ',
-            'ხ',
-            'ჭ',
-            'წ',
-            'ძ',
-            'ც',
-            'ჩ',
-            'შ',
-            'ყ',
-            'ღ',
-            'ქ',
-            'ფ',
-            'ჳ',
-            'ტ',
-            'ს',
-            'რ',
-            'ჟ',
-            'პ',
-            'ო',
-            'ჲ',
-            'ნ',
-            'მ',
-            'ლ',
-            'კ',
-            'ი',
-            'თ',
-            'ჱ',
-            'ზ',
-            'ვ',
-            'ე',
-            'დ',
-            'გ',
-            'ბ',
-            'ა'
-        ]
-    };
-    var createAdditiveCounter = function (value, min, max, symbols, fallback, suffix) {
-        if (value < min || value > max) {
-            return createCounterText(value, fallback, suffix.length > 0);
-        }
-        return (symbols.integers.reduce(function (string, integer, index) {
-            while (value >= integer) {
-                value -= integer;
-                string += symbols.values[index];
-            }
-            return string;
-        }, '') + suffix);
-    };
-    var createCounterStyleWithSymbolResolver = function (value, codePointRangeLength, isNumeric, resolver) {
-        var string = '';
-        do {
-            if (!isNumeric) {
-                value--;
-            }
-            string = resolver(value) + string;
-            value /= codePointRangeLength;
-        } while (value * codePointRangeLength >= codePointRangeLength);
-        return string;
-    };
-    var createCounterStyleFromRange = function (value, codePointRangeStart, codePointRangeEnd, isNumeric, suffix) {
-        var codePointRangeLength = codePointRangeEnd - codePointRangeStart + 1;
-        return ((value < 0 ? '-' : '') +
-            (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, isNumeric, function (codePoint) {
-                return fromCodePoint$1(Math.floor(codePoint % codePointRangeLength) + codePointRangeStart);
-            }) +
-                suffix));
-    };
-    var createCounterStyleFromSymbols = function (value, symbols, suffix) {
-        if (suffix === void 0) { suffix = '. '; }
-        var codePointRangeLength = symbols.length;
-        return (createCounterStyleWithSymbolResolver(Math.abs(value), codePointRangeLength, false, function (codePoint) { return symbols[Math.floor(codePoint % codePointRangeLength)]; }) + suffix);
-    };
-    var CJK_ZEROS = 1 << 0;
-    var CJK_TEN_COEFFICIENTS = 1 << 1;
-    var CJK_TEN_HIGH_COEFFICIENTS = 1 << 2;
-    var CJK_HUNDRED_COEFFICIENTS = 1 << 3;
-    var createCJKCounter = function (value, numbers, multipliers, negativeSign, suffix, flags) {
-        if (value < -9999 || value > 9999) {
-            return createCounterText(value, 4 /* CJK_DECIMAL */, suffix.length > 0);
-        }
-        var tmp = Math.abs(value);
-        var string = suffix;
-        if (tmp === 0) {
-            return numbers[0] + string;
-        }
-        for (var digit = 0; tmp > 0 && digit <= 4; digit++) {
-            var coefficient = tmp % 10;
-            if (coefficient === 0 && contains(flags, CJK_ZEROS) && string !== '') {
-                string = numbers[coefficient] + string;
-            }
-            else if (coefficient > 1 ||
-                (coefficient === 1 && digit === 0) ||
-                (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_COEFFICIENTS)) ||
-                (coefficient === 1 && digit === 1 && contains(flags, CJK_TEN_HIGH_COEFFICIENTS) && value > 100) ||
-                (coefficient === 1 && digit > 1 && contains(flags, CJK_HUNDRED_COEFFICIENTS))) {
-                string = numbers[coefficient] + (digit > 0
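-                /* This step pairs the current decimal digit with its place multiplier
-                   (e.g. 十/百/千). Worked example against the informal Chinese tables
-                   defined below: 2345 builds 五, then 四十五, then 三百四十五, and
-                   finally 二千三百四十五, least-significant digit first. */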
-                    ? multipliers[digit - 1] : '') + string;
-            }
-            else if (coefficient === 1 && digit > 0) {
-                string = multipliers[digit - 1] + string;
-            }
-            tmp = Math.floor(tmp / 10);
-        }
-        return (value < 0 ? negativeSign : '') + string;
-    };
-    var CHINESE_INFORMAL_MULTIPLIERS = '十百千萬';
-    var CHINESE_FORMAL_MULTIPLIERS = '拾佰仟萬';
-    var JAPANESE_NEGATIVE = 'マイナス';
-    var KOREAN_NEGATIVE = '마이너스';
-    var createCounterText = function (value, type, appendSuffix) {
-        var defaultSuffix = appendSuffix ? '. ' : '';
-        var cjkSuffix = appendSuffix ? '、' : '';
-        var koreanSuffix = appendSuffix ? ', ' : '';
-        var spaceSuffix = appendSuffix ? ' ' : '';
-        switch (type) {
-            case 0 /* DISC */:
-                return '•' + spaceSuffix;
-            case 1 /* CIRCLE */:
-                return '◦' + spaceSuffix;
-            case 2 /* SQUARE */:
-                return '◾' + spaceSuffix;
-            case 5 /* DECIMAL_LEADING_ZERO */:
-                var string = createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
-                return string.length < 4 ? "0" + string : string;
-            case 4 /* CJK_DECIMAL */:
-                return createCounterStyleFromSymbols(value, '〇一二三四五六七八九', cjkSuffix);
-            case 6 /* LOWER_ROMAN */:
-                return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
-            case 7 /* UPPER_ROMAN */:
-                return createAdditiveCounter(value, 1, 3999, ROMAN_UPPER, 3 /* DECIMAL */, defaultSuffix);
-            case 8 /* LOWER_GREEK */:
-                return createCounterStyleFromRange(value, 945, 969, false, defaultSuffix);
-            case 9 /* LOWER_ALPHA */:
-                return createCounterStyleFromRange(value, 97, 122, false, defaultSuffix);
-            case 10 /* UPPER_ALPHA */:
-                return createCounterStyleFromRange(value, 65, 90, false, defaultSuffix);
-            case 11 /* ARABIC_INDIC */:
-                return createCounterStyleFromRange(value, 1632, 1641, true, defaultSuffix);
-            case 12 /* ARMENIAN */:
-            case 49 /* UPPER_ARMENIAN */:
-                return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix);
-            case 35 /* LOWER_ARMENIAN */:
-                return createAdditiveCounter(value, 1, 9999, ARMENIAN, 3 /* DECIMAL */, defaultSuffix).toLowerCase();
-            case 13 /* BENGALI */:
-                return createCounterStyleFromRange(value, 2534, 2543, true, defaultSuffix);
-            case 14 /* CAMBODIAN */:
-            case 30 /* KHMER */:
-                return createCounterStyleFromRange(value, 6112, 6121, true, defaultSuffix);
-            case 15 /* CJK_EARTHLY_BRANCH */:
-                return createCounterStyleFromSymbols(value, '子丑寅卯辰巳午未申酉戌亥', cjkSuffix);
-            case 16 /* CJK_HEAVENLY_STEM */:
-                return createCounterStyleFromSymbols(value, '甲乙丙丁戊己庚辛壬癸', cjkSuffix);
-            case 17 /* CJK_IDEOGRAPHIC */:
-            case 48 /* TRAD_CHINESE_INFORMAL */:
-                return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
-            case 47 /* TRAD_CHINESE_FORMAL */:
-                return createCJKCounter(value, '零壹貳參肆伍陸柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '負', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
-            case 42 /* SIMP_CHINESE_INFORMAL */:
-                return createCJKCounter(value, '零一二三四五六七八九', CHINESE_INFORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
-            case 41 /* SIMP_CHINESE_FORMAL */:
-                return createCJKCounter(value, '零壹贰叁肆伍陆柒捌玖', CHINESE_FORMAL_MULTIPLIERS, '负', cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS | CJK_HUNDRED_COEFFICIENTS);
-            case 26 /* JAPANESE_INFORMAL */:
-                return createCJKCounter(value, '〇一二三四五六七八九', '十百千万', JAPANESE_NEGATIVE, cjkSuffix, 0);
-            case 25 /* JAPANESE_FORMAL */:
-                return createCJKCounter(value,
-                    '零壱弐参四伍六七八九', '拾百千万', JAPANESE_NEGATIVE, cjkSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
-            case 31 /* KOREAN_HANGUL_FORMAL */:
-                return createCJKCounter(value, '영일이삼사오육칠팔구', '십백천만', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
-            case 33 /* KOREAN_HANJA_INFORMAL */:
-                return createCJKCounter(value, '零一二三四五六七八九', '十百千萬', KOREAN_NEGATIVE, koreanSuffix, 0);
-            case 32 /* KOREAN_HANJA_FORMAL */:
-                return createCJKCounter(value, '零壹貳參四五六七八九', '拾百千', KOREAN_NEGATIVE, koreanSuffix, CJK_ZEROS | CJK_TEN_COEFFICIENTS | CJK_TEN_HIGH_COEFFICIENTS);
-            case 18 /* DEVANAGARI */:
-                return createCounterStyleFromRange(value, 0x966, 0x96f, true, defaultSuffix);
-            case 20 /* GEORGIAN */:
-                return createAdditiveCounter(value, 1, 19999, GEORGIAN, 3 /* DECIMAL */, defaultSuffix);
-            case 21 /* GUJARATI */:
-                return createCounterStyleFromRange(value, 0xae6, 0xaef, true, defaultSuffix);
-            case 22 /* GURMUKHI */:
-                return createCounterStyleFromRange(value, 0xa66, 0xa6f, true, defaultSuffix);
-            // note: HEBREW shares the case value 22 with GURMUKHI above, so this branch is unreachable dead code in the bundle
-            case 22 /* HEBREW */:
-                return createAdditiveCounter(value, 1, 10999, HEBREW, 3 /* DECIMAL */, defaultSuffix);
-            case 23 /* HIRAGANA */:
-                return createCounterStyleFromSymbols(value, 'あいうえおかきくけこさしすせそたちつてとなにぬねのはひふへほまみむめもやゆよらりるれろわゐゑをん');
-            case 24 /* HIRAGANA_IROHA */:
-                return createCounterStyleFromSymbols(value, 'いろはにほへとちりぬるをわかよたれそつねならむうゐのおくやまけふこえてあさきゆめみしゑひもせす');
-            case 27 /* KANNADA */:
-                return createCounterStyleFromRange(value, 0xce6, 0xcef, true, defaultSuffix);
-            case 28 /* KATAKANA */:
-                return createCounterStyleFromSymbols(value, 'アイウエオカキクケコサシスセソタチツテトナニヌネノハヒフヘホマミムメモヤユヨラリルレロワヰヱヲン', cjkSuffix);
-            case 29 /* KATAKANA_IROHA */:
-                return createCounterStyleFromSymbols(value, 'イロハニホヘトチリヌルヲワカヨタレソツネナラムウヰノオクヤマケフコエテアサキユメミシヱヒモセス', cjkSuffix);
-            case 34 /* LAO */:
-                return createCounterStyleFromRange(value, 0xed0, 0xed9, true, defaultSuffix);
-            case 37 /* MONGOLIAN */:
-                return createCounterStyleFromRange(value, 0x1810, 0x1819, true, defaultSuffix);
-            case 38 /* MYANMAR */:
-                return createCounterStyleFromRange(value, 0x1040, 0x1049, true, defaultSuffix);
-            case 39 /* ORIYA */:
-                return createCounterStyleFromRange(value, 0xb66, 0xb6f, true, defaultSuffix);
-            case 40 /* PERSIAN */:
-                return createCounterStyleFromRange(value, 0x6f0, 0x6f9, true, defaultSuffix);
-            case 43 /* TAMIL */:
-                return createCounterStyleFromRange(value, 0xbe6, 0xbef, true, defaultSuffix);
-            case 44 /* TELUGU */:
-                return createCounterStyleFromRange(value, 0xc66, 0xc6f, true, defaultSuffix);
-            case 45 /* THAI */:
-                return createCounterStyleFromRange(value, 0xe50, 0xe59, true, defaultSuffix);
-            case 46 /* TIBETAN */:
-                return createCounterStyleFromRange(value, 0xf20, 0xf29, true, defaultSuffix);
-            case 3 /* DECIMAL */:
-            default:
-                return createCounterStyleFromRange(value, 48, 57, true, defaultSuffix);
-        }
-    };
-
-    var IGNORE_ATTRIBUTE = 'data-html2canvas-ignore';
-    var DocumentCloner = /** @class */ (function () {
-        function DocumentCloner(context, element, options) {
-            this.context = context;
-            this.options = options;
-            this.scrolledElements = [];
-            this.referenceElement = element;
-            this.counters = new CounterState();
-            this.quoteDepth = 0;
-            if (!element.ownerDocument) {
-                throw new Error('Cloned element does not have an owner document');
-            }
-            this.documentElement = this.cloneNode(element.ownerDocument.documentElement, false);
-        }
-        DocumentCloner.prototype.toIFrame = function (ownerDocument, windowSize) {
-            var _this = this;
-            var iframe = createIFrameContainer(ownerDocument,
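-            /* The clone is mounted in a dedicated iframe sized to windowSize
-               (createIFrameContainer is defined further down in the bundle). The live
-               document's scroll offsets are read next so the clone can be restored to
-               the same scroll position before rendering. */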
-                windowSize);
-            if (!iframe.contentWindow) {
-                return Promise.reject("Unable to find iframe window");
-            }
-            var scrollX = ownerDocument.defaultView.pageXOffset;
-            var scrollY = ownerDocument.defaultView.pageYOffset;
-            var cloneWindow = iframe.contentWindow;
-            var documentClone = cloneWindow.document;
-            /* Chrome doesn't detect relative background-images assigned in inline